Author manuscript; available in PMC 2017 Apr 6.
Published in final edited form as: Int J Comp Psychol. 2015;28. Available at http://escholarship.org/uc/item/8hg831n3

Everywhere and everything: The power and ubiquity of time

Andrew T Marshall 1, Kimberly Kirkpatrick 1
PMCID: PMC5382961  NIHMSID: NIHMS785942  PMID: 28392622

Abstract

Anticipatory timing plays a critical role in many aspects of human and non-human animal behavior. Timing has been consistently observed in the range of milliseconds to hours, and demonstrates a powerful influence on the organization of behavior. Anticipatory timing is acquired early in associative learning and appears to guide association formation in important ways. Importantly, timing participates in regulating goal-directed behaviors in many schedules of reinforcement, and plays a critical role in value-based decision making under concurrent schedules. In addition to playing a key role in fundamental learning processes, timing often dominates when temporal cues are available concurrently with other stimulus dimensions. Such control by the passage of time has even been observed when other cues provide more accurate information and can lead to sub-optimal behaviors. The dominance of temporal cues in governing anticipatory behavior suggests that time may be inherently more salient than many other stimulus dimensions. Discussions of the interface of the timing system with other cognitive processes are provided to demonstrate the powerful and primitive nature of time as a stimulus dimension.

Keywords: time perception, interval timing, classical conditioning, instrumental conditioning, choice

Timing is Everything

Humans and a range of non-human animals are highly sensitive to the passage of time (Lejeune & Wearden, 1991), and this sensitivity goes beyond simply recognizing if an individual is late to a meeting, or being aware of whether a child counted to ten too quickly in a game of hide-and-seek. Timing is involved in activities as disparate as the millisecond differentiation of continuous language into distinct words and sentences (see Mauk & Buonomano, 2004; Tallal, Miller, & Fitch, 1993), and the activity of our body’s hormonal cycles depending on the time of day (see Hastings, O’Neill, & Maywood, 2007). Timing is distributed across a wide range of brain regions, and different circuits are involved in processing different time scales. The sense of time is also unique due to its omnipresent nature. While closing one’s eyes or covering one’s ears allows for blocking visual or auditory stimulation, one cannot escape the passage of time – it is “the primordial context” (Gibbon, Malapani, Dale, & Gallistel, 1997, p. 170). As a result, the passage of time is inherently involved in multiple cognitive processes that collectively permit individuals to progress and function in daily life.

The goal of this review is to illuminate the involvement of temporal processing in multiple cognitive phenomena, coupled with discussions of many cases where temporal cues dominate in controlling behavior. While there are several recently published reviews of interval timing (e.g., Allman, Teki, Griffiths, & Meck, 2014; Buhusi & Meck, 2005; Coull, Cheng, & Meck, 2011; Grondin, 2010), there are no recent comprehensive reviews discussing the involvement of interval timing mechanisms across such a broad range of other cognitive processes. Therefore, the primary objective of the current review is to synthesize research across a range of phenomena for which the common denominator is interval timing, thus providing evidence for the inherent contribution of timing processes to basic aspects of cognition. Indeed, such analysis may even result in the realization that temporal perception and anticipation may contribute in important ways to other more commonly studied mechanisms. Accordingly, an incidental effect of this review may be the provision of relevant paradigmatic information to those researchers interested in implementing timing-based analyses in their research. Ultimately, the current review argues for a re-conceptualization of how we understand and study the perception of time: time perception is not only a behavior to be analyzed, but an explanatory mechanistic factor that may ultimately account for behavior across multiple fields of psychological research.

Timing is Everywhere

Temporal processing extends broadly across cortical and subcortical brain circuits (Morillon, Kell, & Giraud, 2009). The primary neural system implicated in core interval timing processes is the cortico-striatal-thalamic circuit (e.g., Coull et al., 2011; Gibbon et al., 1997; Matell & Meck, 2004), depicted in Figure 1, which consists of the basal ganglia, the nigrostriatal pathway from the substantia nigra pars compacta (SNc) to the caudate/putamen (C/Pu), and the mesolimbic pathway from the ventral tegmental area (VTA) to the nucleus accumbens (NAC) and pre-frontal cortex (PFC). Additional key regions associated with interval timing include the supplementary motor area (SMA), premotor cortex, medial agranular cortex, and parietal cortex (e.g., Coull & Nobre, 2008; Leon & Shadlen, 2003; Matell, Shea-Brown, Gooch, Wilson, & Rinzel, 2011; Rao, Mayer, & Harrington, 2001; Schwartze, Rothermich, & Kotz, 2012).

Figure 1.

Figure 1

Cortico-striatal loops comprising the timing system. The “Shared Processes” box reflects other cognitive processes that have been associated with these same regions of the timing system identified in the diagram, suggesting a strong interaction between various cognitive phenomena and interval timing. SMA = supplementary motor area; PFC = prefrontal cortex; ACC = anterior cingulate cortex; PCC = posterior cingulate cortex; SNc = substantia nigra pars compacta; C/Pu = caudate/putamen; NAC = nucleus accumbens; VTA = ventral tegmental area; GPe = external segment of the globus pallidus; GPi = internal segment of the globus pallidus; TH = thalamus; STN = subthalamic nucleus.

Of greater intrigue is the overlap between areas involved in interval timing and those involved in basic cognitive and psychological functions. For example, in conjunction with an associated role in interval timing, the mesolimbic pathway has been implicated in prediction error learning/classical conditioning and incentive motivation (e.g., Berridge & Robinson, 2003; Schultz, Dayan, & Montague, 1997), and the PFC has been implicated in working memory, attention, and decision-making (e.g., Miller & Cohen, 2001). As a result, dysregulation of any of these phenomena has the potential to affect interval timing, and vice versa. Therefore, temporal processing may stand as one of the most fundamental cognitive mechanisms. Consistent with this idea is accumulating evidence suggesting that time serves as a powerful determinant of a range of cognitive processes.

Involvement of Timing in Fundamental Cognitive Processes

Timing is critical for many cognitive processes, at a minimum, due to the arrow of time (Eddington, 1928), which refers to the sensation of forward movement in time. The arrow of time provides a differentiation of the past from the future, which is essential for cognitive functions that involve ordinal sequencing, coincidence detection, causal inference formation, and any other time-dependent coding of information. Indeed, Stephen Hawking, in his highly influential book, “A Brief History of Time,” proposed that the arrow of time is the basis of all other intellectual capabilities due to its foundational nature in disambiguating the past, present, and future (Hawking, 1988); in other words, the arrow of time is the very foundation upon which our intellect is built. To the extent that this is true, we would expect to see a major interface between the timing system and other cognitive systems, thereby reflecting the foundational role of the timing system in a broad range of cognitive processes.

Figure 1 provides an account of the most well documented shared systems, tying in those cognitive systems with the neural circuits discussed in the previous section. For example, motor sequencing and fine motor control are connected with timing in the basal ganglia and cerebellum (Ferrandez et al., 2003; Harrington, Lee, Boyd, Rapcsak, & Knight, 2004; Ivry & Keele, 1989; Lewis & Miall, 2003; Morillon et al., 2009; Stevens, Kiehl, Pearlson, & Calhoun, 2007; Tregellas, Davalos, & Rojas, 2006). The relationship between timing and motor control is apparent in Parkinson’s disease (see Ivry, 1996). Another shared process is counting and rhythm, which most likely relies on shared systems within cortical regions such as the prefrontal and parietal cortices (Harrington, Haaland, & Knight, 1998; Lewis & Miall, 2003; Morillon et al., 2009; Schubotz, Friederici, & Yves von Cramon, 2000). Incentive motivation, reward valuation, basic conditioning, and aspects of decision making emerge through shared processes involving reward valuation circuitry (VTA to NAC and SNc to striatum; see Galtress, Marshall, & Kirkpatrick, 2012; Kirkpatrick, 2014; Schultz et al., 1997). Attentional and executive functioning are associated with frontal cortico-striatal circuits, which are also implicated in regulating attention to time (Coull, 2004; Coull, Vidal, Nazarian, & Macar, 2004; Ferrandez et al., 2003; Grondin, 2010; Meck & Benson, 2002; Zakay & Block, 2004).

The emergence of timing from several distributed neural systems presents opportunities for an interface of timing with multiple cognitive systems, as well as the availability of those cognitive systems for participation in timing processes. These shared cognitive and neural systems create the possibility for reciprocal relationships to occur between timing and other cognitive functions. The core timing system is comprised of separate neural systems dedicated to different aspects of functioning including systems for circadian timing, short interval (< 2 s) and motor timing, interval timing in the seconds to minutes range, temporal sequencing and counting, and temporal decisions (Carr, 1993; Coull et al., 2011; Dibner, Schibler, & Albrecht, 2010; Mauk & Buonomano, 2004; Morillon et al., 2009; Reppert & Weaver, 2002). From here forward, the core interval timing system, or time collating system (Morillon et al., 2009), will be the primary focus of this paper. However, the other systems are worth mentioning in this broad sense to show that the principles that we discuss in the interval timing system may be more broadly applicable across the whole of the timing system.

The time collating system is responsible for automatically tracking durations of events, regardless of whether individuals are actively engaged in timing. This system allows for prospective timing, when attention is engaged, as well as retrospective timing, when individuals may not explicitly track time (Zakay & Block, 1997). The prospective/retrospective component of the time collating system is associated with both attention and working memory. Attentional switching or gating is proposed to regulate attention used for prospective timing (Lejeune, 1998; Zakay & Block, 1996) and working memory likely plays a key role in retrospective timing (Block & Zakay, 1997; Zakay & Block, 2004), as well as the maintenance of temporal information during prospective timing (Brown, 1997). It appears that working memory and interval timing may rely not only on the same brain regions (including the basal ganglia, VTA, SNc, and PFC), but also on the same basic neural coding mechanisms, in which information regarding stimulus identity is extracted from particular cortical neurons that are associated with the representation, and duration-related information is derived from their relative phase (Lustig, Matell, & Meck, 2005). Overall, it appears that timing and working memory are highly interrelated.

The aforementioned research reflects the intricate relationship between timing mechanisms and a range of cognitive phenomena, suggesting that methods to assess temporal processing capabilities belong in a wide range of experimental psychologists’ toolboxes. Otherwise, multiple explanations for seemingly discrepant behaviors will continue to proliferate in the literature, even though a more overarching explanation for these behaviors in terms of interval timing mechanisms may be achievable. Thus, to more closely examine the general reciprocal role of timing with other cognitive processes, three examples are explored here in greater depth: classical conditioning/prediction error learning (emerging from the mesolimbic pathway), goal-directed actions in schedules of reinforcement (emerging from the nigrostriatal pathway), and decision making (emerging from cortical regions such as the PFC, anterior cingulate cortex, and posterior cingulate cortex).

Classical Conditioning and Prediction Error Learning

In a number of ways, Pavlov (1927) recognized the importance of timing processes in classical conditioning. He regularly reported responding in time bins, an approach that is more typical in the timing literature than in the classical conditioning field, and he wrote about time as a conditioned stimulus (CS) in his discussion of temporal conditioning. Temporal conditioning involves presentation of an unconditioned stimulus (US; e.g., food) at regular intervals (e.g., 1 min). Over the course of training, conditioned responding develops during the US-US interval and responding is non-random. Instead, conditioned responses (CRs) increase in frequency or amplitude as a function of time in the US-US interval (e.g., Kirkpatrick & Church, 2003).

Simple conditioning effects

Given that classical conditioning involves the presentation of CSs and USs that unfold in time, it is not surprising that conditioning is inherently governed by temporal variables. For example, conditioning is most robust when the CS occurs before the US in paradigms such as delay conditioning, whereas conditioning is generally poor when the US occurs before the CS in backwards conditioning (Figure 2). This effect is known as priority in time and it is one of the most fundamental facets of conditioning. The arrow of time provides a means for differentiating the order of events in conditioning paradigms, and without it there would be no basis for the effects of priority in time on conditioning.

Figure 2.

Figure 2

The importance of time’s arrow in disambiguating the order of conditioned stimulus (CS) and unconditioned stimulus (US) deliveries in delay and backwards conditioning paradigms. Without a sense of the movement of time, it would not be possible to discern the ordinal difference between these two paradigms. The fact that delay conditioning results in superior conditioning compared to backwards conditioning in a wide variety of paradigms and species testifies to the importance of time’s arrow in basic conditioning.

Another important consideration is that CRs are timed appropriately at their earliest point of occurrence in a range of appetitive and aversive procedures in multiple species (Balsam, Drew, & Yang, 2002; Davis, Schlesinger, & Sorenson, 1989; Drew, Zupan, Cooke, Couvillon, & Balsam, 2005; Kehoe, Ludvig, Dudeney, Neufeld, & Sutton, 2008; Kirkpatrick & Church, 2000a; Ohyama & Mauk, 2001). The observation of CR timing at the start of learning indicates that learning whether and learning when the US will occur (in relation to the CS) most likely emerge in parallel and at a similar point in conditioning. This is not surprising given that the brain circuits implicated in associative learning (particularly the mesolimbic and nigrostriatal pathways) are also involved in interval timing processes (e.g., Coull et al., 2011; Kable & Glimcher, 2007, 2009; Kirkpatrick, 2014; Morillon et al., 2009; Waelti, Dickinson, & Schultz, 2001). Essentially, timing and associative learning emerge in parallel because these forms of learning most likely originate from shared cognitive and neural systems.

Another important temporal variable in conditioning is that interval durations directly affect the strength and/or probability of CR occurrence in simple conditioning (Holland, 2000; Kirkpatrick & Church, 2000a; Lattal, 1999), and this relationship is observed regardless of the events that cue the onset of the interval. The relationship also holds when comparing responding during the CS and intertrial intervals (ITIs) of different durations (Jennings, Bonardi, & Kirkpatrick, 2007; Kirkpatrick, 2002). While the mean interval duration primarily affects response rate, the variability of interval durations affects the pattern of responding in simple conditioning procedures. For example, random intervals lead to generally constant rates of responding, whereas fixed intervals lead to increasing rates of responding over the course of the CS-US interval (Kirkpatrick, 2002; Kirkpatrick & Church, 1998, 2000a, 2003, 2004).

A final critical factor of simple conditioning is the duration of the CS (trial, or T) relative to the ITI duration (I), the I:T ratio. Larger I:T ratios, in which T is proportionately shorter than I, promote faster acquisition of CRs. The effect of I:T ratios on conditioning has been proposed to occur through the same mechanism that produces differences in response rates as a function of interval duration (Kirkpatrick, 2002; Kirkpatrick & Church, 2003). Alternatively, Rate Estimation Theory (Gallistel & Gibbon, 2000) proposes that the I and T durations are each associated with an information value that is determined by the rate and pattern of reinforcement, with conditioning determined by the ratio of information values. Ultimately, classical conditioning, in terms of CR rate, CR patterns, and speed of acquisition, is critically governed by I:T ratios as well as the individual I and T values. In other words, as the same set of parameter values elicit individual differences in classical conditioning (see Gallistel, Fairhurst, & Balsam, 2004), classical conditioning is conceivably driven by an individual’s sensitivity to the absolute and relative durations of the CS and ITI. In many ways, this is not surprising given that most classical conditioning paradigms involve learning about events that unfold predictably in time. Accordingly, further research on classical conditioning should incorporate individuals’ sensitivities to the passage of time as a key factor in the analysis of psychological mechanisms of simple classical conditioning phenomena. In cases where timing analyses have been incorporated, it is clear that animals time important events even when the associative contingencies should discourage responding altogether, such as in paradigms with zero or negative contingencies (Kirkpatrick & Church, 2004; Williams, Lawson, Cook, Mather, & Johns, 2008), verifying the importance of including such analyses in associative learning studies.
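To make the I:T ratio concrete, the short Python sketch below computes the ratio for a few hypothetical CS (T) and ITI (I) durations; the specific durations are assumptions for demonstration only and are not values from the studies cited above.

```python
# Illustrative I:T ratio calculation with hypothetical durations (seconds).
conditions = [
    ("short CS, long ITI", 10.0, 120.0),
    ("short CS, short ITI", 10.0, 30.0),
    ("long CS, long ITI", 60.0, 120.0),
]

for label, T, I in conditions:
    ratio = I / T  # larger I:T ratios are associated with faster CR acquisition
    print(f"{label}: T = {T:.0f} s, I = {I:.0f} s, I:T ratio = {ratio:.1f}")
```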

Cue integration and competition effects

In addition to these simpler forms of temporal effects on conditioning, complex learning involving multiple CSs is also strongly governed by temporal factors. One of the most striking effects comes from a substantial series of studies on temporal-map formation by Miller and colleagues. Figure 3A displays the design of the study by Cole, Barnet, and Miller (1995), in which a 5-s CS1 was paired with the US in either a delay or trace conditioning arrangement. In delay conditioning, the US immediately followed CS1 whereas in the trace condition, the US occurred following a 5-s gap. In a subsequent Phase 2, CS1 was immediately followed by a 5-s CS2 without any US deliveries, a second-order conditioning arrangement. The interesting facet of their design is that the delay condition resulted in better first-order conditioning in Phase 1, and thus should support better transfer of conditioning in Phase 2, according to associative learning principles. Instead, they found stronger conditioning to CS2 in the trace group. These and other related findings led to the proposal of the temporal encoding hypothesis (Arcediano & Miller, 2002; Savastano & Miller, 1998), which posits that the representations of CS1, CS2, and the US are combined into a temporal map. The temporal maps for the delay and trace groups are displayed in Figure 3B. In this instance, the representation of CS2 is in a forward contiguous relationship with the representation of the US, whereas in the delay condition, CS2 is in a simultaneous relationship with the US. Thus, CS2 leads to superior second-order conditioning in the trace conditioning arrangement due to the advantageous temporal map. Evidence for temporal map formation has been found in a range of conditioning paradigms including overshadowing, blocking, and conditioned inhibition paradigms (e.g., Barnet, Arnold, & Miller, 1991; Barnet, Cole, & Miller, 1997; Barnet, Grahame, & Miller, 1993; Barnet & Miller, 1996; Blaisdell, Denniston, & Miller, 1998; Cole et al., 1995). The formation of temporal maps indicates that time plays a fundamental role not only in simple conditioning, but also in the integration of key pieces of information across experiences.

Figure 3.

Figure 3

A. The design of a study on temporal encoding of durations by Cole, Barnet, and Miller (1995). The delay condition involved successive presentations of conditioned stimulus (CS) 1 and the unconditioned stimulus (US) in Phase 1 followed by CS1→CS2 presentations in Phase 2. The trace condition involved a 5-s gap between CS1 and US in Phase 1, followed by CS1→CS2 presentations in Phase 2. B. The proposed temporal map resulting from the exposure to the delay and trace conditions. Note that although the trace condition resulted in weaker conditioning in Phase 1, the resulting temporal map is more advantageous for promoting conditioned responding to CS2.

Timing also contributes to cue competition paradigms such as overshadowing (Pavlov, 1927), where two CSs of different properties are associated with the same US. Here, the more salient CS usually results in more robust conditioning. Interestingly, in accordance with the strong relationship between classical conditioning and interval timing, the temporal properties of the CSs interact with overshadowing. Specifically, weaker overshadowing occurs with variable duration CSs than with fixed duration CSs (Jennings, Alonso, Mondragón, & Bonardi, 2011) and with longer duration CSs compared with shorter duration CSs (Fairhurst, Gallistel, & Gibbon, 2003; Hancock, 1982; Kehoe, 1983; but see Jennings et al., 2007; McMillan & Roberts, 2010), consistent with the idea that both shorter and less variable CSs (i.e., CSs that should be timed with more absolute precision) may be more salient due to their higher information value in predicting the US (Balsam, Drew, & Gallistel, 2010).

Temporal variables also influence cue competition within blocking paradigms (Kamin, 1968, 1969), which involve pre-training with a CS1→US pairing followed by later CS1+CS2→US pairings. For example, shifts in CS1 duration between phases attenuate blocking in some cases (Barnet et al., 1993; Schreurs & Westbrook, 1982), consistent with the temporal encoding hypothesis (but see Kohler & Ayres, 1979, 1982; Maleske & Frey, 1979). The relationship between CS durations may also affect blocking, with a longer CS1 blocking a shorter CS2 (Gaioni, 1982; Kehoe, Schreurs, & Amodei, 1981), but not vice versa (Jennings & Kirkpatrick, 2006), while other studies have reported little or no asymmetry in blocking (Barnet et al., 1993; Kehoe, Schreurs, & Graham, 1987) or the opposite result with stronger blocking by a shorter CS1 (Fairhurst et al., 2003; McMillan & Roberts, 2010). Despite mixed results, timing processes appear to play a key role in cue competition, consistent with an interaction between timing and associative learning processes. Thus, fundamental learning processes in classical conditioning paradigms are explained, at least in part, by interval timing mechanisms. Indeed, given the preceding discussion of the overlap between timing and conditioning neural pathways (Figure 1), such interactions are far from unexpected. Yet, studies examining the involvement of interval timing in associative learning mechanisms and the corresponding interactions represent only a minority of the associative learning literature. Thus, further research is clearly needed to disentangle the nature of these interactions at both the behavioral and neurobiological levels, providing more information as to the fundamental role of interval timing in many classical conditioning phenomena.

Schedules of Reinforcement

All schedules of reinforcement are time-based to varying degrees, as responses and reinforcers are emitted over time. Some schedules are explicitly time-based, including fixed time (FT), variable time (VT), fixed interval (FI), variable interval (VI), progressive interval, differential reinforcement of low rate (DRL), and fixed minimum interval (FMI) schedules. These schedules are diagrammed in Figure 4. FT and VT schedules do not require any responses, delivering outcomes after a fixed or variable criterion time, respectively. FT schedules are the same as temporal conditioning. FI and VI schedules require a response after a criterion time (t) since the last reinforcer. Progressive interval schedules are a variant on FI schedules, but with the criterion time incrementing from one reinforcer to the next. For example, a progressive interval 10-s schedule would start with a criterion time t (e.g., 10 s) and then increment by t for each subsequent interval. Finally, DRL and FMI schedules (e.g., Mechner & Guevrekian, 1962), or two-lever DRL schedules (Soffié & Lejeune, 1991), require response inhibition during select time periods. In a DRL schedule, individuals must wait for a criterion time between responses, and the clock resets if responses occur early (see the bold “R” in Figure 4). Thus, DRL schedules encourage the development of interresponse times (IRTs) that are longer than the criterion time. The FMI is a variant of the DRL where the response that starts the clock is different from the response required to obtain reinforcement. For example, a rat might have to press the left lever to initiate the interval and the right to collect the reinforcer. If the right lever is pressed too soon, then the interval resets.

Figure 4.

Figure 4

Common time-based schedules of reinforcement. In all procedures, events unfold in time, with specific delays marked by time, t. Food delivery is indicated by a filled circle. In schedules where food is response contingent, the time when food is primed is marked separately (^) from food delivery, and the time of the response that produces the food is similarly noted (R). Procedures involving discrete cues such as tones and lights are denoted by a signal marker (hatched bars). DRL = differential reinforcement of low rate; FMI = fixed minimum interval; FI = fixed interval.
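As a concrete illustration of how two of the time-based contingencies diagrammed in Figure 4 differ, the following minimal Python sketch implements the reinforcement rules for FI and DRL schedules; the response times and criterion values are hypothetical, and the functions are simplified illustrations rather than code from any of the studies cited.

```python
# Minimal sketch of two time-based schedule rules (hypothetical parameters).

def fixed_interval(response_times, criterion):
    """FI: the first response occurring at least `criterion` seconds after the
    last reinforcer is reinforced; earlier responses have no consequence."""
    reinforcer_times = []
    last_food = 0.0
    for t in response_times:
        if t - last_food >= criterion:
            reinforcer_times.append(t)
            last_food = t
    return reinforcer_times

def drl(response_times, criterion):
    """DRL: a response is reinforced only if at least `criterion` seconds have
    elapsed since the previous response; premature responses reset the clock."""
    reinforcer_times = []
    last_response = None
    for t in response_times:
        if last_response is not None and t - last_response >= criterion:
            reinforcer_times.append(t)
        last_response = t  # every response, early or not, restarts the timer
    return reinforcer_times

responses = [2.0, 29.0, 31.0, 65.0, 66.0, 100.0]  # hypothetical response times (s)
print("FI 30 s reinforcers at:", fixed_interval(responses, 30.0))
print("DRL 30 s reinforcers at:", drl(responses, 30.0))
```

With this same sequence of responses, the FI rule reinforces the first response after each 30-s interval (at 31, 65, and 100 s), whereas the DRL rule withholds the first reinforcer because the closely spaced responses at 29 and 31 s restart the clock, illustrating why DRL schedules require response inhibition.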

Given their inherent temporal structure, these aforementioned schedules of reinforcement have been widely used to study mechanisms of interval timing, ultimately revealing the intricate relationship between temporal processing and schedule-controlled behavior. One of the more common instrumental conditioning paradigms to study interval timing is the peak procedure (Roberts, 1981), in which discrete-trial FI trials are intermixed with peak trials (see Figure 4). On FI trials, a signal is presented and then food is primed after a target delay (e.g., 30 s). The first response after the prime results in food delivery and signal termination. Peak trials are the same as FI trials except that the signal is presented for longer than normal and responses have no consequence (i.e., there are no food deliveries). Responding on peak trials increases until the expected time of food delivery and then decreases thereafter (i.e., a “peak” in responding; Roberts, 1981). In addition, these peak functions display scalar variance (see, e.g., Leak & Gibbon, 1995), which is a hallmark of interval timing (Gibbon, 1977). Accordingly, anticipatory goal-directed behavior within schedules of reinforcement is strongly governed by time-based factors, suggesting that understanding subject-specific sensitivities to time may elucidate individual differences in various learning phenomena.
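The scalar property mentioned above can be visualized with a simple schematic: Gaussian-shaped peak functions whose spread is a constant fraction of the timed interval. The sketch below uses arbitrary values for the coefficient of variation and peak rate, and is a cartoon of scalar variance rather than a fit to any data set.

```python
import numpy as np

cv = 0.25         # arbitrary coefficient of variation (spread / timed interval)
peak_rate = 60.0  # arbitrary peak response rate (responses per minute)

for criterion in [10.0, 30.0, 90.0]:   # hypothetical FI criteria (seconds)
    sd = cv * criterion                # scalar variance: spread grows with the interval
    t = np.linspace(0.0, 4.0 * criterion, 200)
    rate = peak_rate * np.exp(-0.5 * ((t - criterion) / sd) ** 2)
    fwhm = 2.355 * sd                  # full width at half maximum of the peak
    print(f"FI {criterion:.0f} s: peak at {criterion:.0f} s, FWHM = {fwhm:.1f} s")
# Rescaling each curve by its criterion (t / criterion) superposes the functions,
# which is the timescale-invariance signature of scalar timing.
```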

The peak procedure has also been implemented in the absence of any programmed response contingencies, thus producing a Pavlovian peak procedure. In the Pavlovian variant, a CS is followed by a US, usually in a delay conditioning arrangement (Figure 2), to induce CRs during the CS. Intermixed with the normal conditioning trials are peak trials in which the CS is presented for longer than normal. Figure 5 displays the results from a Pavlovian peak procedure from Kirkpatrick and Church (2000a; also see Balsam, Drew, & Yang, 2002). The response for the Pavlovian peak procedure was goal-tracking behavior, measured by the rate of responses to the food cup. The Pavlovian procedure yielded very similar patterns of behavior compared to the more common instrumental peak procedure (see, e.g., Roberts, 1981). The most striking difference is that interval duration has a more pronounced effect on response rate in the Pavlovian procedure, which most likely reflects the effect of the response contingency in maintaining response rates in the instrumental procedure. However, despite differences in response-reinforcer contingencies, the timing of responses is highly similar in the two procedures. While one explanation for these results is that different types of conditioning are similarly affected by temporal factors, it is more parsimonious to suggest that temporal processing and conditioning mechanisms are inseparably connected, such that conditioning and timing are not distinct phenomena, but two components of the same process. Accordingly, greater understanding of an individual’s temporal processing abilities will undeniably contribute to explaining that same individual’s ability to learn by association and trial-and-error.

Figure 5.

Figure 5

The rate of goal tracking responses (in responses per minute) as a function of time during peak trials in a Pavlovian peak procedure. Adapted from Kirkpatrick and Church (2000a).

In addition to studying behavior on fixed delay schedules, the peak procedure has been used to examine responding on variable interval schedules (Church, Lacourse, & Crystal, 1998). When intervals are uniformly distributed, so that there is an increasing hazard function (Evans, Hastings, & Peacock, 2000), response rates show a fairly characteristic peak, with the width of the peak dependent on the mean and variance of the uniform distribution (but see Harris, Gharaei, & Pincham, 2011). Thus, similar to what was discussed with Pavlovian conditioning, rats time even when intervals are variable in duration, and they are sensitive to fairly subtle differences in the distribution of intervals (Church & Lacourse, 2001). It is therefore clear that interval timing mechanisms are not only involved when the interval separating two events is fixed; instead, timing processes are broadly engaged whenever there are delays to important events, whether or not those delays are fixed.
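The notion of an increasing hazard function can be made concrete with a short calculation: for intervals drawn uniformly from [a, b], the momentary probability of reinforcement, given that it has not yet occurred, is 1/(b - t) for a ≤ t < b, which rises as time elapses. The bounds in the sketch below are arbitrary.

```python
# Hazard of reinforcement for intervals uniformly distributed on [a, b] seconds:
# h(t) = f(t) / S(t) = 1 / (b - t) for a <= t < b, and 0 before a.
a, b = 30.0, 90.0  # arbitrary bounds of the uniform interval distribution

for t in [10.0, 30.0, 50.0, 70.0, 85.0]:
    hazard = 0.0 if t < a else 1.0 / (b - t)
    print(f"t = {t:4.0f} s: hazard = {hazard:.3f} per second")
```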

While timing on most interval schedules involves examining response rates, the most common metric in DRL and FMI schedules is the distribution of IRTs (Droit, 1994; Ellen, Wilson, & Powell, 1964; Soffié & Lejeune, 1992). The distribution of IRTs has been shown to be roughly centered on the criterion time for reinforcement (Kramer & Rilling, 1970) or systematically later (Wearden, 1990; also see Balci et al., 2011), and IRTs become more variable as the criterion time increases (Jasselette, Lejeune, & Wearden, 1990; Richardson & Loughead, 1974; Sanabria & Killeen, 2008). As the IRT criterion is lengthened, the peak in responding may fall short of the criterion time, reflecting the increase in difficulty in inhibiting responding for longer durations (Doughty & Richards, 2002; Pizzo, Kirkpatrick, & Blundell, 2009; Richards, Sabol, & Seiden, 1993; Richards & Seiden, 1991). In that respect, performance on DRL/FMI schedules is similar to peak timing performance, where responses congregate around the time of reinforcement and their temporal variability is generally proportional to the time to reinforcement (Roberts, 1981). These schedules are also often used to measure response inhibition capacity (Bardo, Cain, & Bylica, 2006; Hill, Covarrubias, Terry, & Sanabria, 2012; Sanabria & Killeen, 2008) and depressive-like properties of behavior (O’Donnell & Seiden, 1983). Accordingly, these procedures provide potential explanatory links between inhibitory and interval timing processes. An example of this connection is shown in Figure 6, which displays IRT distributions for 10- and 30-s criterion as a function of sessions of training. In this study, rats were trained to lever press on DRL schedules with the two different criteria and their distribution of IRTs was measured over the course of training (Pizzo et al., 2009). The IRTs early in training were generally short, peaking at around 1–2 s, which is consistent with the bout-like nature of lever pressing (Shull, Gaynor, & Grimes, 2001, 2002; Shull & Grimes, 2003; Shull, Grimes, & Bennett, 2004). But, over time, the short IRTs were suppressed, while the IRTs surrounding the criterion time increased in frequency, which can be seen by comparing the early and late IRT functions for the two DRL criteria.

Figure 6.

Figure 6

Inter-response time (IRT) distributions in log-spaced bins as a function of sessions of training for differential reinforcement of low rate (DRL) criteria of 15 or 30 s. The early distributions were from sessions 1–2 of training, and the late distributions, from sessions 9–10. The vertical dashed lines denote the IRT criterion in effect for the DRL schedules. Adapted from Pizzo, Kirkpatrick, and Blundell (2009).
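To show how an IRT distribution of the kind plotted in Figure 6 is constructed, the sketch below bins a set of simulated inter-response times into log-spaced bins. The simulated mixture of short, bout-like IRTs and longer, criterion-timed IRTs is purely illustrative and is not data from Pizzo et al. (2009).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated IRTs (s): short within-bout IRTs plus IRTs clustered near a 30-s
# DRL criterion. These values are illustrative only.
short_irts = rng.exponential(1.5, size=200)
timed_irts = rng.normal(30.0, 7.5, size=200).clip(min=1.0)
irts = np.concatenate([short_irts, timed_irts])

# Log-spaced bins, as in Figure 6: equal widths on a logarithmic time axis.
bins = np.logspace(np.log10(0.1), np.log10(100.0), num=16)
counts, edges = np.histogram(irts, bins=bins)
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:6.2f}-{hi:6.2f} s: {n:3d}")
```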

The short IRTs are an indicator of impulsive behaviors (Cheng, MacDonald, & Meck, 2006; Jentsch & Taylor, 1999; Peterson, Wolf, & White, 2003; Sanabria & Killeen, 2008) and difficulty in inhibiting short IRTs on tasks such as DRL schedules can serve as a marker of poor executive functioning (Solanto, 2002), further demonstrating the link between timing and executive processes. Therefore, the schedules of reinforcement that have been traditionally used to determine how well individuals can time the separation between their responses (i.e., DRL, FMI) may also provide significant insight into inhibitory and executive processes, ultimately reflecting the core connection between temporal processing and multiple fundamental cognitive phenomena. For example, deficits in executive functioning and inhibitory processes may be explained, at least in part, by deficits in temporal processing, a factor that deserves further attention in future research.

Translational applications

While much of the previously discussed research has been conducted in non-human animals, there are equivalent time-based schedules that are used for human participants including FI (Baron, Kaufman, & Stauber, 1969), DRL (Gordon, 1979; also see Çavdaroğlu, Zeki, & Balcı, 2014), and FMI (also known as a temporal production task; Bizo, Chu, Sanabria, & Killeen, 2006; Carrasco, Guillem, & Redolat, 2000). Additionally, variations on the peak procedure (Rakitin et al., 1998) have also been developed for measuring human timing. A more recently developed human timing task, called the beat-the-clock task, is designed so that participants are rewarded for making a single response as close as possible to the end of an interval; the more closely they respond to interval termination, the greater the reward (Simen, Balcı, deSouza, Cohen, & Holmes, 2011). In general, human performance on all of the tasks with animal analogs closely mirrors timing behavior in non-human animals, although humans often show slightly less variability in their timing (Buhusi & Meck, 2005). However, when experimental instructions are removed and consummatory responses are required, humans then behave much like non-human animals in various schedules of reinforcement (see Matthews, Shimoff, Catania, & Sagvolden, 1977). Thus, future experiments analyzing various cognitive phenomena in human participants could easily include an evaluation of temporal processing capabilities, as individual or group differences in other cognitive behaviors may be elucidated and perhaps even explained by individual differences in interval timing. This would be beneficial for determining whether the interface between the timing system and other systems, such as executive functioning, holds across species.

Decision Making

Arguably, daily life is a continual experience of, and selection between, multiple concurrent (i.e., simultaneously presented) schedules of reinforcement, reflecting the classic problem of behavioral allocation. Accordingly, given the innate relationship between temporal processing and multiple fundamental cognitive phenomena within schedules of reinforcement, it should not be surprising that temporal processing is critically involved in choices between schedules of reinforcement that unfold in time. Indeed, choice behavior in the laboratory is typically studied with more complex instrumental conditioning schedules that involve two or more concurrently available options with potentially different outcomes. As with simpler conditioning procedures, many choice procedures involve events that unfold in time, and differences in delays may form a vital component of the decision process. In these cases, one would expect to see a central function for timing processes in decision making.

For example, impulsive choice (also referred to as delay discounting or intertemporal choice) procedures involve delivering choices between a smaller reward that is available sooner (the SS outcome) versus a larger reward that is available after a longer delay (the LL outcome). Choices of the SS are usually a marker of impulsive choice behavior, particularly when choosing the SS results in relatively fewer overall rewards earned. Thus, impulsive choice involves a trade-off between delay and amount. The predominant model of impulsive choice is the hyperbolic discounting model, V = A/(1 + kD), which proposes that subjective value, V, decays from the veridical amount, A, as a function of delay, D, a process known as delay discounting. The rate of discounting is determined by the parameter k, and impulsive choice/k values have been proposed to serve as a stable trait variable in both humans (Baker, Johnson, & Bickel, 2003; Jimura et al., 2011; Johnson, Bickel, & Baker, 2007; Kirby, 2009; Matusiewicz, Carter, Landes, & Yi, 2013; Ohmura, Takahashi, Kitamura, & Wehr, 2006; Peters & Büchel, 2009) and rats (Galtress, Garcia, & Kirkpatrick, 2012; Garcia & Kirkpatrick, 2013; Marshall, Smith, & Kirkpatrick, 2014).
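A brief worked example of the hyperbolic model may help. The sketch below computes subjective values for a smaller-sooner and a larger-later option (using the 1-pellet/10-s versus 2-pellet/30-s arrangement described later in this section) across several arbitrary k values, showing how a steeper discounting rate shifts preference toward the impulsive option.

```python
def hyperbolic_value(amount, delay, k):
    """Subjective value under hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1.0 + k * delay)

ss_amount, ss_delay = 1.0, 10.0   # smaller-sooner: 1 pellet after 10 s
ll_amount, ll_delay = 2.0, 30.0   # larger-later: 2 pellets after 30 s

for k in [0.02, 0.05, 0.20]:      # arbitrary discounting rates
    v_ss = hyperbolic_value(ss_amount, ss_delay, k)
    v_ll = hyperbolic_value(ll_amount, ll_delay, k)
    preference = "LL (self-controlled)" if v_ll > v_ss else "SS (impulsive)"
    print(f"k = {k:.2f}: V(SS) = {v_ss:.3f}, V(LL) = {v_ll:.3f} -> {preference}")
```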

Recent research has begun examining the underlying mechanisms of impulsive choice and there is growing evidence implicating core timing processes as a key underlying cognitive component to impulsive choice behavior. In humans, impulsive individuals overestimate interval durations (Baumann & Odum, 2012) and exhibit earlier start times on FI schedules (Darcheville, Rivière, & Wearden, 1992). Similarly, interval timing mechanisms have been implicated in delayed gratification tasks (McGuire & Kable, 2012, 2013), in which an individual can choose to accept a smaller reward at any time prior to the availability of a delayed larger reward. Specifically, decisions to forego waiting longer for the larger reward have been suggested to reflect the subjective belief that the longer an individual has already waited for the larger reward, the longer s/he will have to keep waiting, whereas decisions to accept the smaller reward more quickly (i.e., wait for shorter durations before choosing the smaller reward) have been suggested to reflect faster internal clocks (see McGuire & Kable, 2013).

Additionally, research with rats has indicated that temporal discrimination/temporal precision abilities may play a key role in impulsive choice/delay discounting (Marshall et al., 2014; McClure, Podos, & Richardson, 2014). Specifically, McClure et al. (2014) found that the rats that timed with greater precision in a peak procedure (i.e., narrower peak functions) also made fewer impulsive choices. In addition, Marshall et al. (2014) tested rats on an impulsive choice procedure and conducted additional measurements of timing and delay aversion using a temporal bisection task (Church & Deluty, 1977) and a progressive interval schedule, respectively. Figure 7A displays the correlational patterns for individual rats in their study. Rats with higher standard deviations of their bisection functions, indicating poorer temporal discrimination, displayed greater impulsive choice (lower LL choices). These rats also displayed greater delay intolerance, with lower breakpoints on a progressive interval schedule indicating that they gave up sooner when the delay was increased (Figure 7B). This pattern of results indicates a strong interrelationship between interval timing processes, delay tolerance, and impulsive choice, and suggests that core timing processes may play a critical role in the fundamental cognitive process of decision making in time-based tasks. Accordingly, future research on impulsive choice and decision making phenomena may benefit from real-time assessments of temporal processing ability, instead of simply inferring sensitivity to time by fitting choice data with modifications of the hyperbolic discounting model described above (Myerson & Green, 1995).

Figure 7.

Figure 7

A. The relationship between the log odds of larger-later (LL) choices, an indicator of impulsivity (with lower scores indicating more impulsive choices) and the standard deviation (σ) of the temporal bisection function (with higher values indicating poorer timing precision/temporal discrimination). B. The relationship between the log odds of LL choices and the progressive interval breakpoint indicating delay tolerance, with higher breakpoints associated with greater delay tolerance. Overall, poorer timing precision, poorer delay tolerance, and greater impulsive choice were inter-correlated. Adapted from Marshall, Smith, and Kirkpatrick (2014).

Following on from these findings, Smith, Marshall, and Kirkpatrick (2015) examined whether time-based interventions could mitigate impulsive choice by delivering FI, VI, and DRL schedules between a pre- and post-intervention assessment of impulsive choice behavior. All three schedules decreased post-intervention impulsive choices in normal (Sprague-Dawley) rats, whereas the effects of the FI and VI schedules were weaker and shorter-lived in Lewis rats, a potential animal model of disordered impulsive choice (Anderson & Diller, 2010; Anderson & Woolverton, 2005; Garcia & Kirkpatrick, 2013; García-Lecumberri et al., 2010; Huskinson, Krebs, & Anderson, 2012; Madden, Smith, Brewer, Pinkston, & Johnson, 2008; Stein, Pinkston, Brewer, Francisco, & Madden, 2012). The effects of the DRL schedule on choice behavior are shown in Figure 8A, where rats were tested for choices of a 10-s, 1-pellet SS versus a 30-s, 2-pellet LL before (PRE) and after (POST) the intervention. In addition to decreasing impulsive choices, the intervention also improved timing behavior, as shown in Figure 8B. Here, the main effects were to decrease the standard deviation (width) of the peak and increase peak rate. These effects on the peak are reflective of improvements in temporal discrimination/timing precision, consistent with the correlational patterns in Figure 7A. The combined results suggest that good temporal discrimination abilities are critical for being able to wait for outcomes that are delayed, which may ultimately relate to making better-informed choices. Accordingly, temporal processing abilities may explain individual differences not only in subjective optimality, but also in objective optimality in terms of rewards earned per unit time, reflecting the central notion of the innate involvement of temporal processing mechanisms in basic cognitive processes (i.e., decision making).

Figure 8.

Figure 8

A. The log odds of making larger-later (LL) choices during the pre- and post-intervention tests of impulsive choice. B. Response rates of lever pressing on peak trials as a function of time. Peak trials were administered during the choice task during the pre- and post- intervention tests of impulsive choice. Adapted from Smith, Marshall, and Kirkpatrick (2015).

The central involvement of interval timing in a cognitive phenomenon as fundamental as decision-making between differentially delayed outcomes has been supported by recent proposals that hyperbolic discounting is inherently rooted in the scalar property of interval timing. Specifically, the hyperbolic delay discounting model and the results from impulsive choice procedures indicate that increases in the delay to reward reduce the subjective value of reward. The function relating delay to subjective value is best approximated by a hyperbolic function. In his seminal paper, Gibbon (1977) derived a hyperbolic expectancy function for the expectancy of reward (h) over time (t) from scalar timing processes by proposing that h(t) = H/(x - t), where H is the incentive value of the reward, x is the expected delay to reward, and t is the time elapsed in the interval, so that x - t is the time remaining until reward delivery. Gibbon’s solution proposes that the value of H, which is essentially the same as A in the hyperbolic discounting equation, is spread over the time remaining in the interval. This relates to the scalar property because shorter delays would result in comparatively less spread of value than longer delays. At the start of the delay (t = 0), h(t) is equal to the overall reinforcement rate associated with that delay (amount/delay); because this initial expectancy is inversely proportional to the delay, the value assigned to a reward at the point of choice diminishes hyperbolically as its delay lengthens. It is worth noting that this formulation is a precursor to the later Gibbon and Balsam (1981) model that eventually led to the development of Rate Estimation Theory (Gallistel & Gibbon, 2000) described above. Thus, Gibbon’s solution is relevant not only to choice behavior, but also to reinforcer valuation in basic conditioning and learning paradigms. This connection demonstrates the fundamental interconnection between timing processes and a variety of basic learning phenomena.

Gibbon’s original account has been expanded upon more recently by Cui (2011). The key connection in Cui’s derivation is through Weber’s law, which is the fundamental tenet behind the scalar property of interval timing. Cui relies on the observation that subjective judgments of quantities, in this case the amount of reward and the delay to reward, are determined by Weber fractions a and b, respectively. This means that both amount and time are subject to scalar variance. Drawing on this principle, the overall subjective value of a delayed reward, V, at time t, can be found from the following explicit solution: V(t) = (1 - a)^[ln(t)/ln(1 + b)], producing diminishing reductions in subjective value as a function of time, in accordance with hyperbolic discounting (see Mazur, 1987; Myerson & Green, 1995; Rodriguez & Logue, 1988). In this equation, time is logarithmically scaled [ln(t)], consistent with proposals that a logarithmic representation of time is responsible for hyperbolic discounting and the preference reversals in choice behavior that are observed at differential delays (see Takahashi, 2005; Takahashi, Han, & Nakamura, 2012; Takahashi, Oono, & Radford, 2008).
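The following sketch illustrates, with arbitrary Weber fractions and an arbitrary discount rate, how the Weber-step formulation produces a decelerating value function that is qualitatively similar to the hyperbolic form; the particular parameter values are assumptions for demonstration, not estimates from Cui (2011) or from any data set.

```python
import math

def weber_step_value(t, a, b):
    """Value after the number of Weber-sized time steps spanning 1 to t seconds:
    V(t) = (1 - a) ** (ln(t) / ln(1 + b))."""
    return (1.0 - a) ** (math.log(t) / math.log(1.0 + b))

def hyperbolic_value(t, k):
    """Hyperbolic discounting of a unit amount: V = 1 / (1 + k * t)."""
    return 1.0 / (1.0 + k * t)

a, b, k = 0.05, 0.15, 0.1  # arbitrary Weber fractions (amount, time) and discount rate
for t in [1, 5, 10, 30, 60, 120]:
    print(f"t = {t:4d} s: Weber-step V = {weber_step_value(t, a, b):.3f}, "
          f"hyperbolic V = {hyperbolic_value(t, k):.3f}")
```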

Thus, hyperbolic discounting is ultimately derived directly from the scalar property of interval timing, further linking timing and choice phenomena and providing an explanation for the relationship between temporal discrimination/precision and basic decision making. Indeed, some individuals with temporal processing deficits also exhibit decision making deficits, such as drug addicts (Bickel & Marsch, 2001; Wittmann, Leland, Churan, & Paulus, 2007), schizophrenics (Allman & Meck, 2012; Heerey, Robinson, McMahon, & Gold, 2007), and attention deficit hyperactivity disorder patients (Barkley, Edwards, Laneri, Fletcher, & Metevia, 2001; Toplak, Dockstader, & Tannock, 2006), further confirming these purported links.

Time as an Explanatory Mechanism

The research described thus far has demonstrated the role of core timing processes in a variety of time-based learning paradigms. If time is as fundamental as the aforementioned results suggest, then time might be expected to exert strong stimulus control, perhaps even superseding discrete cues such as auditory and/or visual stimuli. Such observations would further implicate the timing system as primitive and foundational for other cognitive functions.

The Dominance of Temporal Cues

One of the seminal demonstrations of the power of time as a cue was provided by Williams and LoLordo (1995). Within a fear conditioning paradigm, conditioning to a tone CS was blocked by temporal cues, but temporal cues were not blocked by the tone CS. This asymmetry suggested that time may govern behavior to a greater degree than a discrete cue such as an auditory tone within an aversive paradigm. Kirkpatrick and Church (2000b) extended the findings of Williams and LoLordo (1995) to an appetitive paradigm; temporal cues employed in initial training continued to control behavior more than subsequently-introduced auditory stimulus cues, even though the auditory cues were more contiguous with food delivery (also see Goddard & Jenkins, 1988). Therefore, a discrete stimulus that was more predictive of reward delivery was superseded in terms of stimulus control by a distant temporal cue (or, time marker).

Furthermore, Caetano, Guilhardi, and Church (2012) presented rats with three different FI schedules signaled by distinct stimuli (e.g., houselight → FI 30 s; white noise → FI 60 s; clicker stimulus → FI 120 s). In one experiment, each session involved 60 presentations of one of the stimulus-FI pairings, and each session employed a different stimulus-FI pairing relative to the prior session (e.g., Session 1: houselight → FI 30 s; Session 2: clicker stimulus → FI 120 s). At the end of each of the sessions, the rats’ timing behavior was indicative of sensitivity to the current FI schedule. However, despite the extensive training with each of the stimulus-FI pairings, the rats’ responding early in the sessions did not differ across FI schedules. In other words, the rats’ absolute response gradients across FI schedules were approximately superposed early in each of the sessions, but became more differentiated in accordance with the current FI schedule as the session progressed. Ultimately, these results suggest that the particular stimulus that reliably predicted the current FI schedule in each session did not control behavior as much as the time-based nature of the task itself did. In accordance with Williams and LoLordo (1995), Caetano et al. (2012) suggested that temporal memories regarding food delivery may exceed other relevant stimuli in salience. Therefore, in conjunction with the results described above, this result supports the strong control of time over behavior and the considerable involvement of temporal processing in multiple cognitive phenomena.

Simultaneous Temporal Processing

Based on the superiority of time-based cues, it could be assumed that with a greater number of effective time markers present, the uncertainty in estimating the time to reward delivery should substantially decrease. Previous research has shown that non-human animals, such as rats and pigeons, are able to combine information conveyed by multiple time markers to predict the time of food delivery (Church, Guilhardi, Keen, MacInnis, & Kirkpatrick, 2003; Guilhardi, Keen, MacInnis, & Church, 2005; Kirkpatrick & Church, 2000a; Leak & Gibbon, 1995; MacInnis, 2007; MacInnis, Marshall, Freestone, & Church, 2010; Meck & Church, 1984). Such time-based cue integration occurs in simultaneous temporal processing paradigms where overlapping intervals are signaled by different time markers (e.g., Church et al., 2003; Kirkpatrick & Church, 2000a; Meck & Church, 1984). The ability to combine multiple cues of temporal information without the consequence of information overload provides support for the involvement of temporal processing in basic cognitive phenomena.

While the analysis of interval timing has primarily focused on the timing of a single interval, simultaneous timing occurs in many natural and artificial situations. If a previous food delivery signals a 120-s interval before the next food delivery, then that food delivery serves as a time marker; the possible onset (and offset) of an embedded stimulus within the food-to-food interval is another time marker that serves as a more contiguous predictor of upcoming food delivery. While an initial prediction may be that animals should ignore the first time marker in favor of the second, more contiguous time marker, animals do not, but instead combine the temporal information provided by these distinct cues (Guilhardi et al., 2005; Kirkpatrick & Church, 2000a, 2000b). In fact, patterns of anticipatory responding become more complex with each additional time marker (e.g., Dews, 1965), so that embedded stimuli presented closer to reinforcement exert greater control over behavior than less temporally-proximate stimuli (see Fairhurst et al., 2003; Farmer & Schoenfeld, 1966; Snapper, Kadden, Shimoff, & Schoenfeld, 1975).

Despite the historical divergence of the conditioning and timing sub-fields of experimental psychology (see Kalafut, Freestone, MacInnis, & Church, 2014; Kirkpatrick & Church, 1998), the paradigms used in both conditioning and timing experiments converge in their use of multiple time markers. Given the simultaneous emergence of timing and associative learning, as described above, several conditioning phenomena may be more parsimoniously explained via simultaneous temporal processing. For example, recent evidence has suggested that the large differences in response rates traditionally observed in instrumental versus classical conditioning paradigms (Domjan, 2010) may actually be driven by differences in time markers rather than differences in either the type of learning or response contingency; that is, response non-contingent reinforcement in classical conditioning paradigms in itself serves as an effective time marker to reduce response rate until reinforcement (Freestone, MacInnis, & Church, 2013). Moreover, the evidence that larger I:T ratios produce faster acquisition in conditioning paradigms, as described above, may be at least partially explained by simultaneous temporal processing and time marker effectiveness in terms of the time marker’s temporal position relative to reinforcement (Holland, 2000; Kirkpatrick & Church, 2000a; Lattal, 1999). Given the necessity of learning for individual adaptation, such an explanation would therefore suggest that time is in fact a powerful stimulus (cf., Balsam et al., 2010). Furthermore, it suggests that, as described above, the perception of time is the fundamental cornerstone of basic cognition.

Such immediate time-based cues are far from the only sources of temporal information that an individual concurrently processes. For example, the stimulus-to-food interval is not only embedded within a food-to-food interval, but within the duration of the experimental session, the light:dark cycle, the day, month, year, and so on. Accordingly, given the evidence for the simultaneous timing of multiple intervals, it should come as no surprise that animals track durations beyond those experienced at the individual trial level. Wilson and Crystal (2012) showed that rats that were fed immediately following an experimental session within the operant chamber performed more poorly during a temporal bisection task as the time to the post-session feeding approached, compared to rats that were not provided an immediate post-session meal. Importantly, the former experimental group continued to show adequate temporal discrimination performance, while also exhibiting elevated rates of head-entry behavior in anticipation of the post-session feeding, whereas the latter control group did not show anticipatory behavior (see Wilson & Crystal, 2012). Interestingly, Wilson, Pizzo, and Crystal (2013) subsequently showed that presentation of a time-marker before the post-session meal produced a rapid increase in rats’ head-entry behavior, while the rats also exhibited adequate temporal discrimination performance. This steep increase in head-entry behavior given a late-onset time-marker is comparable to the within-trial patterns of responding observed in FI schedules when time-markers are embedded very proximal to the time of food availability (Guilhardi et al., 2005), suggesting an interesting overlap between the effects of time-markers on between- versus within-trial behavior.

Furthermore, Plowright (1996) implemented a procedure in which pigeons responded on a tandem VI 5 – FI 10 s schedule of reinforcement (i.e., the FI schedule began once the VI schedule was completed), with each component on a separate key. The procedure also scheduled a larger reinforcer at either 6 or 16 minutes into the 20-minute experimental session; the larger reinforcer was delivered contingent on completion of the VI 5 s schedule following either of these longer delays. Consistent with the results presented by Wilson and Crystal (2012), Plowright (1996) showed that pigeons accurately timed the FI 10 s schedule of reinforcement while also exhibiting increases in response rate on the VI 5 s schedule at approximately the expected time of the larger reinforcement. As simultaneous temporal processing analyses have typically involved the overlap of separate intervals in anticipation of the same reinforcer, the results from these studies are striking, emphasizing the simultaneous impact of time-based stimuli on behavior at both molecular and molar scales. Specifically, rather than an animal selectively attending to a single stimulus, temporal information from multiple sources is integrated so as to best predict upcoming events (i.e., reinforcement).

The ease with which simultaneous time markers are integrated may be due to the presence of multiple and independent temporal signatures in the brain. Buhusi and Meck (2009) presented rats with a modified tri-peak procedure. Each of three levers delivered food on an FI 10, FI 30, or FI 90 s schedule; on each trial, signaled by houselight onset, one lever was randomly selected to deliver food according to its FI schedule. On a subset of peak trials, the houselight signal was disrupted 15 s into the trial for 1, 3, 10, or 30 s (i.e., "gap" trials). A 10-s gap differentially shifted timing on the 10-s, 30-s, and 90-s peak trials, suggesting that the simultaneous timing of the three intervals was distinctly altered by the same gap. Buhusi and Meck (2009) concluded that the timing of the intervals was simultaneous but separate, so that multiple internal clocks independently timed each interval. Accordingly, simultaneous temporal processing may engage multiple clocks so that the information-processing system does not become overloaded. Thus, our timing system may have adapted to accommodate a plethora of simultaneous time markers by employing multiple temporal signatures. Ultimately, the mechanisms of simultaneous temporal processing help explain how timing is inherently involved in fundamental cognitive processes that require multiple events to be tracked as they unfold together in time.
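The logic of the gap manipulation can be conveyed with a highly simplified sketch. The code below assumes that each target interval is timed by its own accumulator and that each accumulator simply pauses ("stops") during the gap; the stop and reset rules are textbook idealizations, not Buhusi and Meck's (2009) model, and the output merely illustrates why one and the same gap need not produce the same proportional disruption across simultaneously timed intervals.

```python
def peak_time_with_gap(target, gap_start=15.0, gap_dur=10.0, rule="stop"):
    """Idealized peak time for one independently timed FI target on a gap trial.

    Under a 'stop' rule the clock pauses during the gap, shifting the peak by
    the gap duration; under a 'reset' rule timing restarts after the gap.
    Both rules are textbook idealizations used only for illustration.
    """
    if gap_start >= target:
        return float(target)  # the gap begins after the expected food time
    if rule == "stop":
        return target + gap_dur
    if rule == "reset":
        return gap_start + gap_dur + target
    raise ValueError(rule)

for target in (10, 30, 90):
    shift = peak_time_with_gap(target) - target
    print(f"FI {target:>2} s: peak shifted by {shift:2.0f} s "
          f"({shift / target:.0%} of the target interval)")
```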

Time as Information

Time markers are assumed to convey temporal information; that is, information about the expected time to reinforcement. A time marker may be more effective, and thus convey more information, when the rate of reward it signals exceeds that signaled by a second time marker. For example, in Rate Estimation Theory (Gallistel & Gibbon, 2000), the stimulus (i.e., time marker) begins to control behavior when the ratio of the reinforcement rate in the presence of the stimulus to the reinforcement rate in its absence exceeds a given threshold (i.e., when time-marker strength is high). Furthermore, Delay Reduction Theory computes the difference between the food-to-food interval and the stimulus-to-food interval; the larger the difference, the greater the reward rate during the stimulus-to-food interval and the more the stimulus (i.e., time marker) controls behavior (Fantino, Preston, & Dunn, 1993). Therefore, in conjunction with results describing differential stimulus control over behavior given differential CS-US contingencies (e.g., Rescorla, 1968), time-marker effectiveness may be quantified in terms of the information provided regarding changes in reward rate (also see Egger & Miller, 1962). Accordingly, the intimate involvement of interval timing in fundamental learning processes may be explained by the possibility that events in time represent the best signals for when an individual should adapt his or her behavior to most effectively interact with the environment.
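For illustration, the following sketch computes simplified stand-ins for these two quantities for a 120-s food-to-food interval: a Rate Estimation Theory-style ratio of the reinforcement rate during the stimulus to the overall reinforcement rate, and a Delay Reduction Theory-style proportion by which stimulus onset reduces the expected wait to food. The single-reinforcer-per-cycle assumption and the threshold of 2 are placeholders, not parameters of either theory.

```python
def delay_reduction(food_to_food, stim_to_food):
    """Delay Reduction Theory-style index: the proportional reduction in the
    expected wait to food produced by stimulus onset."""
    return (food_to_food - stim_to_food) / food_to_food

def rate_ratio(food_to_food, stim_to_food):
    """Rate Estimation Theory-style index: reinforcement rate during the
    stimulus relative to the overall (background) rate, assuming one
    reinforcer per food-to-food cycle purely for illustration."""
    return (1.0 / stim_to_food) / (1.0 / food_to_food)

THRESHOLD = 2.0  # placeholder decision threshold, not a fitted parameter

for stim_to_food in (10, 30, 60, 110):
    r = rate_ratio(120, stim_to_food)
    d = delay_reduction(120, stim_to_food)
    print(f"stim-to-food = {stim_to_food:>3} s: rate ratio = {r:4.1f}, "
          f"delay reduction = {d:.2f}, controls behavior: {r > THRESHOLD}")
```

Both indices grow as the stimulus-to-food interval shrinks relative to the food-to-food interval, capturing the intuition that more temporally proximal time markers convey more information about upcoming reinforcement.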

A second conceptualization is conveyed by information theory (Cover & Thomas, 1991). Based on the pioneering work of Shannon (1948), information in a simple delay-conditioning paradigm is calculated as the difference in the entropies of two distinct memory distributions for when food will be delivered – one for the previous food (the food-to-food interval) and one for the stimulus (the stimulus-to-food interval) – plus a scalar variance factor (Balsam et al., 2010; Balsam, Fairhurst, & Gallistel, 2006; Balsam & Gallistel, 2009). Intuitively, this amounts to calculating the reduction in uncertainty about when food will be available that is gained by the addition of a second time marker (i.e., the stimulus). Indeed, the application of information theory to classical conditioning can explain the blocking and overshadowing results of cue competition studies (Balsam & Gallistel, 2009), essentially in terms of how effective each stimulus is as a time marker. Accordingly, learning is dependent on the lengths of the food-to-food and stimulus-to-food intervals (Balsam & Gallistel, 2009). In other words, learning is temporally dependent, thus supporting the current proposal that the processing of time serves as a fundamental cornerstone of basic cognition and emphasizing the necessity of assessing temporal processing within other cognitive tasks.
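A compact numerical sketch of this logic, under the scalar-timing assumption that the standard deviation of each memory distribution is proportional to its mean, is given below. With a 120-s food-to-food interval and a 30-s stimulus-to-food interval, the stimulus conveys log2(120/30) = 2 bits about the time of food, and the difference between the two Gaussian entropies gives the same value because the scalar variance term cancels; the Weber fraction of 0.16 is an arbitrary placeholder.

```python
import math

def informativeness(food_to_food, stim_to_food):
    """Bits of information about the time of food gained from stimulus onset."""
    return math.log2(food_to_food / stim_to_food)

def scalar_entropy(mean, weber=0.16):
    """Differential entropy (in bits) of a Gaussian memory distribution whose
    standard deviation is weber * mean (the scalar-timing assumption)."""
    sd = weber * mean
    return 0.5 * math.log2(2 * math.pi * math.e * sd ** 2)

C, T = 120.0, 30.0   # food-to-food and stimulus-to-food intervals, in seconds
print(informativeness(C, T))                  # 2.0 bits
print(scalar_entropy(C) - scalar_entropy(T))  # also 2.0 bits; the scalar
                                              # variance term cancels out
```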

Pigeons and Time

Although many, if not all, species appear to be heavily reliant on timing systems, an interesting result that has permeated the literature is that pigeons seem to be especially sensitive to temporal cues and particularly apt to use them. One striking example is the mid-session reversal procedure, in which the reinforcement contingencies switch in the middle of a session. In the pigeon variant of the procedure, pecks to one key (e.g., red) are reinforced at the start of the session while pecks to the other key (e.g., green) are not; then, halfway through the session, the contingencies reverse. The most efficient way to solve this task is a win-stay/lose-shift strategy: stay on the reinforced option until non-reinforcement first occurs, and then switch, which is the strategy employed by rats (Rayburn-Reeves, Stagner, Kirk, & Zentall, 2013). Pigeons, on the other hand, anticipate the upcoming switch and also continue to make some responses on the non-reinforced option after the switch (Cook & Rosen, 2010; Laude, Stagner, Rayburn-Reeves, & Zentall, 2014; McMillan & Roberts, 2015; Stagner, Michler, Rayburn-Reeves, Laude, & Zentall, 2013), although the extent to which they do this may depend somewhat on other available cues (McMillan & Roberts, 2015) or on the ITI duration (Rayburn-Reeves, Laude, & Zentall, 2013). By performing special tests with ITI durations that were shorter or longer than the training ITI, McMillan and Roberts (2012) demonstrated that the pigeons were using the time from session onset to anticipate the switch in contingencies. This is a highly inefficient strategy that resulted in many errors on the pigeons' part, and it further demonstrates that timing cues often dominate even when superior local reinforcement cues are available. Presumably, pigeons would not rely on temporal cues unless the use of these cues was generally adaptive. Accordingly, future research is needed to better understand individual- and species-specific sensitivities to temporal cues in situations where the use of such cues is seemingly irrational yet subjectively optimal.
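The cost of a timing-based strategy in this procedure can be appreciated with a simple simulation. The sketch below compares a win-stay/lose-shift agent with an agent that switches at a noisy, scalar estimate of the session midpoint; the 80-trial session and the Weber fraction of 0.15 are arbitrary illustration values rather than parameters from the cited experiments.

```python
import random

def simulate_session(strategy, n_trials=80, weber=0.15):
    """Errors in one simulated mid-session reversal session.

    The correct key is A for the first half of the session and B afterwards.
    'wsls': win-stay/lose-shift -- switch keys after the first non-reinforced
    response. 'timing': switch at a noisy, scalar estimate of the session
    midpoint. All parameter values are arbitrary illustration choices.
    """
    reversal = n_trials // 2
    errors = 0
    choice = 'A'
    if strategy == 'timing':
        switch_trial = max(1, round(random.gauss(reversal, weber * reversal)))
    for trial in range(n_trials):
        correct = 'A' if trial < reversal else 'B'
        if strategy == 'timing':
            choice = 'A' if trial < switch_trial else 'B'
        if choice != correct:
            errors += 1
            if strategy == 'wsls':
                choice = 'B' if choice == 'A' else 'A'
    return errors

random.seed(1)
for strategy in ('wsls', 'timing'):
    mean_errors = sum(simulate_session(strategy) for _ in range(1000)) / 1000
    print(f"{strategy:>6}: {mean_errors:.1f} errors per session on average")
```

Across simulated sessions the win-stay/lose-shift agent averages a single error per session, whereas the timing agent accumulates anticipatory and perseverative errors in proportion to the noise in its midpoint estimate.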

A similar case of the dominance of temporal cues can be seen in response-initiated delay schedules. In these schedules, one cue signals trial onset (e.g., red) and another cue signals delay onset (e.g., green). Following food delivery, the trial onset cue is presented. The first peck to that cue results in the initiation of the fixed delay (T) to reward and its associated cue. The time between trial onset and the keypeck is denoted as the waiting time (t) and this delay is under the control of the animal. Therefore, in this schedule, the optimal strategy for the pigeon is to respond as soon as the trial onset cue is presented to minimize waiting time. Wynne and Staddon (1988; see also Wynne & Staddon, 1992) assessed performance in different variants of response-initiated delay schedules. In one condition, T was fixed regardless of t, the classic response-initiated delay schedule, and in other conditions T was adjusted to compensate for t to neutralize the cost of longer waiting times on the overall delay to reinforcement. Birds were tested under different values of T and the relationship between T and t was assessed. They found that waiting times were linearly related to the delay to reward, regardless of the contingencies in place, and that the distribution of waiting times was scalar. This means that the pigeons’ waiting times tracked the delay to reinforcement, even when doing so was sub-optimal. These results provide further evidence that pigeons possess a built-in timing mechanism that automatically tracks delays to reinforcement and adjusts behavior accordingly, even when doing so is costly.
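The scalar relationship between waiting times and the programmed delay can be illustrated with a short simulation in which the mean waiting time is a fixed fraction of T and its standard deviation is proportional to that mean; the proportionality constant and Weber fraction below are arbitrary and are not estimates from Wynne and Staddon (1988).

```python
import random

def waiting_times(T, k=0.3, weber=0.2, n=10000):
    """Simulated waiting times t on a response-initiated delay schedule.

    Assumes, purely for illustration, that t averages k * T with Gaussian
    noise whose standard deviation is proportional to that mean (scalar
    variability); k and weber are placeholders, not fitted values.
    """
    return [max(0.0, random.gauss(k * T, weber * k * T)) for _ in range(n)]

random.seed(0)
for T in (4, 8, 16, 32):
    ts = waiting_times(T)
    mean = sum(ts) / len(ts)
    sd = (sum((x - mean) ** 2 for x in ts) / len(ts)) ** 0.5
    print(f"T={T:>2} s  mean t={mean:5.2f} s  sd={sd:4.2f} s  cv={sd / mean:.2f}")
```

The mean and standard deviation of the waiting times grow with T while the coefficient of variation stays roughly constant, mirroring the linear, scalar relationship described above even though longer waits only add to the overall delay to reinforcement.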

Another example of the dominance of temporal cues was provided by Roberts, Coughlin, and Roberts (2000), who tested pigeons' use of timing versus number cues. Pigeons were presented with a variant of the peak procedure in which responding to a flashing light was reinforced either after a certain number of flashes (a fixed number, or FN, schedule) or after a fixed amount of time (an FI schedule). The FN and FI schedules were cued with differently colored lights, and pigeons received the two schedules intermixed. Following training, the pigeons received mixed test trials in which the counting cue would be turned on for 10 s and then would switch to the timing cue, or vice versa. On counting→timing test trials, the pigeons peaked at the FI value, indicating that they were timing from the start of the trial even though they were cued to count at the start. On timing→counting test trials, the pigeons switched from timing to counting when cued to do so. The results indicate that the pigeons timed regardless of cueing, but that they could count when cued to do so at the end of the trial. In essence, the pigeons counted when cued, but always timed. Rats, on the other hand, appear to be more balanced. Meck and Church (1983) trained rats with pulsing tones in which either the number of pulses or the total duration of the tone pulses (2 pulses/2 s vs. 8 pulses/8 s) could serve as the cue in a temporal bisection task. In a subsequent test with variations in either the pulse number or the duration, rats showed strong control by both dimensions, and the psychophysical functions for time and number were essentially indistinguishable. Thus, rats appeared to simultaneously time and count. The finding that both pigeons and rats "always time" may reflect the omnipresent aspect of time discussed earlier in the paper. However, because pigeons appear to be particularly sensitive to this aspect of time, additional research is warranted to better elucidate the comparative use of temporal cues.
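The mode-control idea behind Meck and Church's (1983) account can be caricatured in a few lines of code: a single pacemaker-accumulator reads out duration when its switch stays closed for the whole stimulus ("run" mode) and reads out number when it closes briefly for each event ("event" mode). The pacemaker rate and pulse-packet size below are arbitrary, and the sketch is a toy illustration rather than the model itself.

```python
def accumulate(n_events, total_duration, mode, rate=5.0, pulses_per_event=1.0):
    """Toy version of a mode-control accumulator.

    In 'run' mode the switch stays closed for the whole stimulus, so the
    accumulator integrates pacemaker pulses over total_duration (a duration
    readout). In 'event' mode the switch closes briefly for each event, so
    the accumulator grows with n_events (a number readout). The pacemaker
    rate and pulse-packet size are arbitrary illustration values.
    """
    if mode == "run":
        return rate * total_duration
    if mode == "event":
        return pulses_per_event * n_events
    raise ValueError(mode)

# Training stimuli: 2 tone pulses over 2 s versus 8 pulses over 8 s, so the
# duration readout and the number readout are confounded and either one could
# support the bisection discrimination.
for n, dur in ((2, 2), (8, 8)):
    print(n, dur, accumulate(n, dur, "run"), accumulate(n, dur, "event"))
```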

Pigeons are also prone to exhibit oscillatory processes on peak procedures (Kirkpatrick-Steger, Miller, Betti, & Wasserman, 1996). Pigeons were trained on a standard peak procedure with variations in the ratio of the peak trial duration to the FI trial duration. When those durations were in a 4:1 ratio, the pigeons showed a strong oscillatory response, peaking at the FI and then again at 3 times the FI value. The second peak was often associated with a response rate nearly as high as that of the original peak. Even more striking was the observation of longer-term oscillations when special peak trials were included that were 8 times the original duration. In this case, four peaks were observed at regular intervals, but with dampened oscillation, so that the height of the peaks declined over time. These observations suggest that pigeons may engage in behavioral oscillations that reflect strong underlying control by timing cues, further emphasizing the core involvement of temporal processing mechanisms in goal-directed actions. Thus far, similar results have not been shown in rats, suggesting a further divergence in control by temporal cues between these two species. Ultimately, our understanding of time and of individual and species differences in temporal processing is still in its infancy. Further comparative, neuroscientific, and psychological research on differential cognitive mechanisms and their relationship to individual and species differences in interval timing represents a critical avenue for advancing our understanding of brain and behavior mechanisms. Specifically, if the perception of time underlies fundamental aspects of basic cognition, then it should be a more widely studied and appreciated psychological process in future research.

What is Left to be Discovered

The pervasiveness of interval timing processes across brain and behavior mechanisms does not preclude the further exploration of such phenomena. Rather, it raises even greater questions. For example, there has been long-standing debate regarding whether mental representations of time are linearly or logarithmically scaled (Cerutti & Staddon, 2004; Church & Deluty, 1977; Gibbon, 1977; Gibbon & Church, 1981; Yi, 2009), a question that fundamentally addresses the function mapping objective time onto subjective time, and yet there are no clear answers to this problem. Accordingly, different theories of behavior that are centered on or incorporate interval timing posit different representations of time. For example, a logarithmic representation of time is assumed within the Behavioral Economic Model (Jozefowiez, Staddon, & Cerutti, 2009) and the Multiple Timescale Model (Staddon & Higa, 1999), while a linear representation of time is assumed within Scalar Expectancy Theory (Gibbon, 1977, 1991), the Behavioral Theory of Timing (Killeen & Fetterman, 1988), and the Learning to Time theory (Machado, 1997). It is challenging to understand the mechanisms of interval timing if our theoretical accounts disagree on processes as fundamental as how time is psychologically scaled. Thus, a greater understanding of the mental representation of time will aid the convergence of distinct theoretical accounts into a centralized and comprehensive theory of behavior. That being said, the role of temporal processing in fundamental cognitive processes warrants the inclusion of an interval timing module in extant and future theories of many forms of cognition, some of which were discussed here, as such an inclusion may provide a more comprehensive theoretical account of behavior across multiple experimental manipulations.
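The substance of the debate can be stated compactly: under a logarithmic mapping, equal ratios of objective durations correspond to equal subjective differences, whereas under a linear mapping equal objective differences do. The short sketch below, with an arbitrary scaling constant, simply prints this contrast for two duration pairs.

```python
import math

def subjective_time(t, scale="linear", k=1.0):
    """Map objective duration t (s) to a subjective magnitude under the two
    candidate scalings; the constant k is arbitrary."""
    return k * t if scale == "linear" else k * math.log(t)

for a, b in ((10, 20), (40, 80)):
    linear_diff = subjective_time(b) - subjective_time(a)
    log_diff = subjective_time(b, "log") - subjective_time(a, "log")
    print(f"{a:>2} s vs {b:>2} s: linear diff = {linear_diff:4.1f}, "
          f"log diff = {log_diff:.2f}")
```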

A second open question involves the neurobiology and neuropathology of interval timing. There is no single disease that purely reflects a deficit in interval timing, although many neuropathologies are characterized by dysfunctional temporal processing (see Allman & Meck, 2012; Balci, Meck, Moore, & Brunner, 2009; Coull et al., 2011; Ward, Kellendonk, Kandel, & Balsam, 2012). By better understanding the basic facets of the timing system, as well as the intimate connection between timing and basic components of cognitive function, it may become possible for those predisposed to such disorders to receive the neurocognitive and pharmacological treatments needed to temporarily or permanently delay pathogenesis while they are still pre-symptomatic. Indeed, interval timing deficits have been observed in individuals with schizophrenia and in individuals at high risk for developing the disorder (Allman & Meck, 2012; Penney, Meck, Roberts, Gibbon, & Erlenmeyer-Kimling, 2005), suggesting that longitudinal evaluations of interval timing may also provide insight into disease prognosis.

As has been conveyed throughout this review, interval timing is closely intertwined with fundamental cognitive processes such as learning and decision making. Indeed, learning and decision-making deficits may reflect deficits in the processing and integration of sequential (or simultaneous) temporal events and durations. Accordingly, efforts to alleviate such deficits may be more effective if temporal processing is part of the focus of therapeutic techniques. Moreover, individual differences in interval timing may ultimately explain individual differences in those mechanisms that are tied to suboptimal, maladaptive, and health-compromising behaviors (e.g., Marshall et al., 2014). For instance, if dysfunctional interval timing can explain impulsivity in drug abusers (see Wittmann & Paulus, 2008), then testing interval timing early in life may help identify individuals prone to substance abuse, allowing targeted interventions prior to addiction. Given the prevalence of standardized testing, physical education and health programs in elementary and secondary schools could implement standardized assessments of basic cognitive functions, such as interval timing, thereby permitting the subsequent administration of individually tailored therapies to deter such potential outcomes. Overall, the interaction of the temporal processing system with the neurobiological correlates and psychological processes of attention, learning, incentive valuation, decision making, working memory, and related functions suggests that a greater understanding of the perception of time may ultimately unveil key answers to questions that have continued to perplex psychological scientists and neuroscientists alike.

Regardless of the task or paradigm, time is the single most ubiquitous component of every experimental procedure across multiple disciplines and sub-disciplines. The concept of time may refer to the dimension of a predictive stimulus, the function by which a person, action, or event changes, or, simply, the global experimental context within which more tangible and discrete contexts rest. Such omnipresence and, ultimately, omnipotence are well captured by the multitude of potential neurobiological correlates of temporal processing and the host of cognitive mechanisms that involve interval timing. Accordingly, if temporal processing mechanisms contribute to other cognitive mechanisms, then psychologists, cognitive scientists, cognitive neuroscientists, and related researchers have a responsibility to administer assessments of timing abilities in subsequent behavioral experiments. Only then may a comprehensive and thorough understanding and theoretical account of the interaction between brain mechanisms and behavioral processes be achieved.

Acknowledgments

Some of the research described in this paper was supported by grant NIMH-R01 085739 awarded to Kimberly Kirkpatrick and Kansas State University.

References

  1. Allman MJ, Meck WH. Pathophysiological distortions in time perception and timed performance. Brain. 2012;135:656–677. doi: 10.1093/brain/awr210. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Allman MJ, Teki S, Griffiths TD, Meck WH. Properties of the internal clock: first- and second-order principles of subjective time. Annu Rev Psychol. 2014;65:743–771. doi: 10.1146/annurev-psych-010213-115117. [DOI] [PubMed] [Google Scholar]
  3. Anderson KG, Diller JW. Effects of acute and repeated nicotine administration on delay discounting in Lewis and Fischer 344 rats. Behavioural Pharmacology. 2010;21:754–764. doi: 10.1097/FBP.0b013e328340a050. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Anderson KG, Woolverton WL. Effects of clomipramine on self-control choice in Lewis and Fischer 344 rats. Pharmacology, Biochemistry and Behavior. 2005;80:387–393. doi: 10.1016/j.pbb.2004.11.015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Arcediano F, Miller RR. Some constraints for models of timing: A temporal coding hypothesis perspective. Learning and Motivation. 2002;33:105–123. doi: 10.1006/lmot.2001.1102. [DOI] [Google Scholar]
  6. Baker F, Johnson MW, Bickel WK. Delay discounting in current and never-before cigarette smokers: similarities and differences across commodity, sign, and magnitude. Journal of Abnormal Psychology. 2003;112:382–392. doi: 10.1037/0021-843X.112.3.382. [DOI] [PubMed] [Google Scholar]
  7. Balci F, Freestone D, Simen P, deSouza L, Cohen JD, Holmes P. Optimal temporal risk assessment. Frontiers in Integrative Neuroscience. 2011;5 doi: 10.1177/0269881110364272. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Balci F, Meck WH, Moore H, Brunner D. Timing deficits in aging and neuropathology. In: Bizon JL, Woods A, editors. Animal Models of Human Cognitive Aging. Totowa, NJ: Humana Press; 2009. pp. 161–201. [Google Scholar]
  9. Balsam PD, Drew MR, Gallistel CR. Time and associative learning. Comparative Cognition & Behavior Reviews. 2010;5:1–22. doi: 10.3819/ccbr.2010.50001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Balsam PD, Drew MR, Yang C. Timing at the start of associative learning. Learning and Motivation. 2002;33:141–155. doi: 10.1006/lmot.2001.1104. [DOI] [Google Scholar]
  11. Balsam PD, Fairhurst S, Gallistel CR. Pavlovian contingencies and temporal information. Journal of Experimental Psychology: Animal Behavior Processes. 2006;32:284–294. doi: 10.1037/0097-7403.32.3.284. [DOI] [PubMed] [Google Scholar]
  12. Balsam PD, Gallistel CR. Temporal maps and informativeness in associative learning. Trends in Neurosciences. 2009;32:73–78. doi: 10.1016/j.tins.2008.10.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Bardo MT, Cain ME, Bylica KE. Effect of amphetamine on response inhibition in rats showing high or low response to novelty. Pharmacology Biochemistry and Behavior. 2006;85:98–104. doi: 10.1016/j.pbb.2006.07.015. [DOI] [PubMed] [Google Scholar]
  14. Barkley RA, Edwards G, Laneri M, Fletcher K, Metevia L. Executive functioning, temporal discounting, and sense of time in adolescents with attention deficit hyperactivity disorder (ADHD) and oppositional defiant disorder (ODD) Journal of Abnormal Child Psychology. 2001;29:541–556. doi: 10.1023/A:1012233310098. [DOI] [PubMed] [Google Scholar]
  15. Barnet RC, Arnold HM, Miller RR. Simultaneous conditioning demonstrated in second-order conditioning: Evidence for similar associative structure in forward and simultaneous conditioning. Learning and Motivation. 1991;22 doi: 10.1016/0023-9690(91)90008-V. [DOI] [Google Scholar]
  16. Barnet RC, Cole RP, Miller RR. Temporal integration in second-order conditioning and sensory preconditioning. Animal Learning & Behavior. 1997;25:221–233. doi: 10.3758/BF03199061. [DOI] [Google Scholar]
  17. Barnet RC, Grahame NJ, Miller RR. Temporal encoding as a determinant of blocking. Journal of Experimental Psychology: Animal Behavior Processes. 1993;19:327–341. doi: 10.1037/0097-7403.19.4.327. [DOI] [PubMed] [Google Scholar]
  18. Barnet RC, Miller RR. Temporal encoding as a determinant of inhibitory control. Learning and Motivation. 1996;27:73–91. doi: 10.1006/lmot.1996.0005. [DOI] [Google Scholar]
  19. Baron A, Kaufman A, Stauber KA. Effects of instructions and reinforcement-feedback on human operant behavior maintained by fixed-interval reinforcement. Journal of the Experimental Analysis of Behavior. 1969;12:701–712. doi: 10.1901/jeab.1969.12-701. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Baumann AA, Odum AL. Impulsivity, risk taking, and timing. Behavioural Processes. 2012;90:408–414. doi: 10.1016/j.beproc.2012.04.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Berridge KC, Robinson TE. Parsing reward. Trends in Neurosciences. 2003;26:507–513. doi: 10.1016/S0166-2236(03)00233-9. [DOI] [PubMed] [Google Scholar]
  22. Bickel WK, Marsch LA. Toward a behavioral economic understanding of drug dependence: delay discounting processes. Addiction. 2001;96:73–86. doi: 10.1080/09652140020016978. [DOI] [PubMed] [Google Scholar]
  23. Bizo LA, Chu JYM, Sanabria F, Killeen PR. The failure of Weber’s law in time perception and production. Behavioural Processes. 2006;71:201–210. doi: 10.1016/j.beproc.2005.11.006. [DOI] [PubMed] [Google Scholar]
  24. Blaisdell AP, Denniston JC, Miller RR. Temporal encoding as a determinant of overshadowing. J Exp Psychol Anim Behav Process. 1998;24:72–83. doi: 10.1037/0097-7403.24.1.72. [DOI] [PubMed] [Google Scholar]
  25. Block RA, Zakay D. Prospective and retrospective duration judgments: A meta-analytic review. Psychonomic Bulletin & Review. 1997;4:184–197. doi: 10.3758/BF03209393. [DOI] [PubMed] [Google Scholar]
  26. Brown SW. Attentional resources in timing: Interference effects in concurrent temporal and nontemporal working memory tasks. Perception & Psychophysics. 1997;59:1118–1140. doi: 10.3758/BF03205526. [DOI] [PubMed] [Google Scholar]
  27. Buhusi CV, Meck WH. What makes us tick? Functional and neural mechanisms of interval timing. Nature Reviews Neuroscience. 2005;6:755–765. doi: 10.1038/nrn1764. [DOI] [PubMed] [Google Scholar]
  28. Buhusi CV, Meck WH. Relativity theory and time perception: single or multiple clocks? PLoS ONE. 2009;4:e6268. doi: 10.1371/journal.pone.0006268. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Caetano MS, Guilhardi P, Church RM. Stimulus control in multiple temporal discriminations. Learning & Behavior. 2012;40:520–529. doi: 10.3758/s13420-012-0071-9. [DOI] [PubMed] [Google Scholar]
  30. Carr CE. Processing of temporal information in the brain. Annual Review of Neuroscience. 1993;16:223–243. doi: 10.1146/annurev.ne.16.030193.001255. [DOI] [PubMed] [Google Scholar]
  31. Carrasco MC, Guillem MJ, Redolat R. Estimation of short temporal intervals in Alzheimer’s disease. Exp Aging Res. 2000;26:139–151. doi: 10.1080/036107300243605. [DOI] [PubMed] [Google Scholar]
  32. Çavdaroğlu B, Zeki M, Balcı F. Time-based reward maximization. Philosophical Transactions of the Royal Society B. 2014;369:20120461. doi: 10.1098/rstb.2012.0461. [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Cerutti DT, Staddon JER. Immediacy versus anticipated delay in the time-left experiment: A test of the cognitive hypothesis. Journal of Experimental Psychology: Animal Behavior Processes. 2004;30:45–57. doi: 10.1037/0097-7403.30.1.45. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Cheng RK, MacDonald CJ, Meck WH. Differential effects of cocaine and ketamine on time estimation: Implications for neurobiological models of interval timing. Pharmacology, Biochemistry and Behavior. 2006;85:114–122. doi: 10.1016/j.pbb.2006.07.019. [DOI] [PubMed] [Google Scholar]
  35. Church RM, Deluty MZ. Bisection of temporal intervals. Journal of Experimental Psychology: Animal Behavior Processes. 1977;3:216–228. doi: 10.1037/0097-7403.3.3.216. [DOI] [PubMed] [Google Scholar]
  36. Church RM, Guilhardi P, Keen R, MacInnis M, Kirkpatrick K. Simultaneous temporal processing. In: Helfrich H, editor. Time and Mind II: Information Processing Perspectives. Gottingen, Germany: Hogrefe & Huber Publishers; 2003. pp. 3–19. [Google Scholar]
  37. Church RM, Lacourse DM. Temporal memory of interfood interval distributions with the same mean and variance. Learning & Motivation. 2001;32:2–21. doi: 10.1006/lmot.2000.1076. [DOI] [Google Scholar]
  38. Church RM, Lacourse DM, Crystal JD. Temporal search as a function of the variability of interfood intervals. J Exp Psychol Anim Behav Process. 1998;24:291–315. doi: 10.1037/0097-7403.24.3.291. [DOI] [PubMed] [Google Scholar]
  39. Cole RP, Barnet RC, Miller RR. Temporal encoding in trace conditioning. Animal Learning & Behavior. 1995;23:144–153. doi: 10.3758/BF03199929. [DOI] [Google Scholar]
  40. Cook RG, Rosen HA. Temporal control of internal states in pigeons. Psychonomic Bulletin & Review. 2010;17:915–922. doi: 10.3758/PBR.17.6.915. [DOI] [PubMed] [Google Scholar]
  41. Coull JT. fMRI studies of temporal attention: allocating attention within, or towards, time. Brain Res Cogn Brain Res. 2004;21:216–226. doi: 10.1016/j.cogbrainres.2004.02.011. [DOI] [PubMed] [Google Scholar]
  42. Coull JT, Cheng RK, Meck WH. Neuroanatomical and neurochemical substrates of timing. Neuropsychopharmacology. 2011;36:3–25. doi: 10.1038/npp.2010.113. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Coull JT, Nobre A. Dissociating explicit timing from temporal expectation with fMRI. Current Opinion in Neurobiology. 2008;18:137–144. doi: 10.1016/j.conb.2008.07.011. [DOI] [PubMed] [Google Scholar]
  44. Coull JT, Vidal F, Nazarian B, Macar F. Functional anatomy of the attentional modulation of time estimation. Science. 2004;303:1506–1508. doi: 10.1126/science.1091573. [DOI] [PubMed] [Google Scholar]
  45. Cover TM, Thomas JA. Elements of Information Theory. New York, NY: Wiley-Interscience; 1991. [Google Scholar]
  46. Cui X. Hyperbolic discounting emerges from the scalar property of interval timing. Frontiers in Integrative Neuroscience. 2011;5:1–2. doi: 10.3389/fnint.2011.00024. [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Darcheville JC, Rivière V, Wearden JH. Fixed-interval performance and self-control in children. Journal of the Experimental Analysis of Behavior. 1992;57:187–199. doi: 10.1901/jeab.1992.57-187. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Davis M, Schlesinger LS, Sorenson CA. Temporal specificity of fear conditioning: effects of different conditioned stimulus-unconditioned stimulus intervals on the fear-potentiated startle effect. J Exp Psychol Anim Behav Process. 1989;15:295–310. doi: 10.1037/0097-7403.15.4.295. [DOI] [PubMed] [Google Scholar]
  49. Dews PB. The effect of multiple SΔ periods on responding on a fixed-interval schedule: III. Effect of changes in pattern of interruptions, parameters and stimuli. Journal of the Experimental Analysis of Behavior. 1965;8:427–435. doi: 10.1901/jeab.1965.8-427. [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Dibner C, Schibler U, Albrecht U. The mammalian circadian timing system: organization and coordination of central and peripheral clocks. Annual Review of Physiology. 2010;72:517–549. doi: 10.1146/annurev-physiol-021909-135821. [DOI] [PubMed] [Google Scholar]
  51. Domjan M. The Principles of Learning and Behavior. 6. Belmont, CA: Wadsworth; 2010. [Google Scholar]
  52. Doughty AH, Richards JB. Effects of reinforcer magnitude on responding under differential-reinforcement-of-low-rate schedules of rats and pigeons. Journal of the Experimental Analysis of Behavior. 2002;78:17–30. doi: 10.1901/jeab.2002.78-17. [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. Drew MR, Zupan B, Cooke A, Couvillon PA, Balsam PD. Temporal control of conditioned responding in goldfish. J Exp Psychol Anim Behav Process. 2005;31:31–39. doi: 10.1037/0097-7403.31.1.31. [DOI] [PubMed] [Google Scholar]
  54. Droit S. Temporal regulation of behavior with an external clock in 3-year-old children: Differences between waiting and response duration tasks. Journal of Experimental Child Psychology. 1994;58:332–345. doi: 10.1006/jecp.1994.1038. [DOI] [Google Scholar]
  55. Eddington AS. The nature of the physical world. Cambridge, UK: Cambridge University Press; 1928. [Google Scholar]
  56. Egger MD, Miller NE. Secondary reinforcement in rats as a function of information value and reliability of the stimulus. Journal of Experimental Psychology. 1962;64:97–104. doi: 10.1037/h0040364. [DOI] [PubMed] [Google Scholar]
  57. Ellen P, Wilson AS, Powell EW. Septal inhibition and timing behavior in the rat. Experimental Neurology. 1964;10:120–132. doi: 10.1016/0014-4886(64)90089-5. [DOI] [PubMed] [Google Scholar]
  58. Evans M, Hastings N, Peacock B. Statistical Distributions. New York: Wiley; 2000. [Google Scholar]
  59. Fairhurst S, Gallistel CR, Gibbon J. Temporal landmarks: proximity prevails. Animal Cognition. 2003;6:113–120. doi: 10.1007/s10071-003-0169-8. [DOI] [PubMed] [Google Scholar]
  60. Fantino E, Preston RA, Dunn R. Delay reduction: current status. Journal of the Experimental Analysis of Behavior. 1993;60:159–169. doi: 10.1901/jeab.1993.60-159. [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. Farmer J, Schoenfeld WN. Varying temporal placement of an added stimulus in a fixed-interval schedule. Journal of the Experimental Analysis of Behavior. 1966;9:369–375. doi: 10.1901/jeab.1966.9-369. [DOI] [PMC free article] [PubMed] [Google Scholar]
  62. Ferrandez AM, Hugueville L, Lehéricy S, Poline JB, Marsault C, Pouthas V. Basal ganglia and supplementary motor area subtend duration perception: an fMRI study. NeuroImage. 2003;19:1532–1544. doi: 10.1016/S1053-8119(03)00159-9. [DOI] [PubMed] [Google Scholar]
  63. Freestone DM, MacInnis MLM, Church RM. Response rates are governed more by time cues than contingency. Timing & Time Perception. 2013;1:3–20. doi: 10.1163/22134468-00002006. [DOI] [Google Scholar]
  64. Gaioni SJ. Blocking and nonsimultaneous compounds: Comparison of responding during compound conditioning and testing. Pavlovian Journal of Biological Science. 1982;17:16–29. doi: 10.1007/BF03003472. [DOI] [PubMed] [Google Scholar]
  65. Gallistel CR, Fairhurst S, Balsam P. The learning curve: implications of a quantitative analysis. Proceedings of the National Academy of Sciences. 2004;101:13124–13131. doi: 10.1073/pnas.0404965101. [DOI] [PMC free article] [PubMed] [Google Scholar]
  66. Gallistel CR, Gibbon J. Time, rate, and conditioning. Psychological Review. 2000;107:289–344. doi: 10.1037/0033-295X.107.2.289. [DOI] [PubMed] [Google Scholar]
  67. Galtress T, Garcia A, Kirkpatrick K. Individual differences in impulsive choice and timing in rats. Journal of the Experimental Analysis of Behavior. 2012;98:65–87. doi: 10.1901/jeab.2012.98-65. [DOI] [PMC free article] [PubMed] [Google Scholar]
  68. Galtress T, Marshall AT, Kirkpatrick K. Motivation and timing: clues for modeling the reward system. Behavioural Processes. 2012;90:142–153. doi: 10.1016/j.beproc.2012.02.014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  69. García-Lecumberri C, Torres I, Martín S, Crespo JA, Miguéns M, Nicanor C, … Ambrosio E. Strain differences in the dose-response relationship for morphine self-administration and impulsive choice between Lewis and Fischer 344 rats. Journal of Psychopharmacology. 2010;25:783–791. doi: 10.1177/0269881110367444. [DOI] [PubMed] [Google Scholar]
  70. Garcia A, Kirkpatrick K. Impulsive choice behavior in four strains of rats: Evaluation of possible models of Attention-Deficit/Hyperactivity Disorder. Behavioural Brain Research. 2013;238:10–22. doi: 10.1016/j.bbr.2012.10.017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  71. Gibbon J. Scalar expectancy theory and Weber’s law in animal timing. Psychological Review. 1977;84:279–325. doi: 10.1037/0033-295X.84.3.279. [DOI] [Google Scholar]
  72. Gibbon J. Origins of scalar timing. Learning and Motivation. 1991;22:3–38. doi: 10.1016/0023-9690(91)90015-Z. [DOI] [Google Scholar]
  73. Gibbon J, Balsam PD. Spreading association in time. In: Locurto CM, Terrace HS, Gibbon J, editors. Autoshaping and conditioning theory. New York: Academic Press; 1981. pp. 219–253. [Google Scholar]
  74. Gibbon J, Church RM. Time left: linear versus logarithmic subjective time. Journal of Experimental Psychology: Animal Behavior Processes. 1981;7:87–108. doi: 10.1037/0097-7403.7.2.87. [DOI] [PubMed] [Google Scholar]
  75. Gibbon J, Malapani C, Dale CL, Gallistel CR. Toward a neurobiology of temporal cognition: advances and challenges. Current Opinion in Neurobiology. 1997;7:170–184. doi: 10.1016/S0959-4388(97)80005-0. [DOI] [PubMed] [Google Scholar]
  76. Goddard MJ, Jenkins HM. Blocking of a CS-US association by a US-US association. Journal of Experimental Psychology: Animal Behavior Processes. 1988;14:177–186. doi: 10.1037/0097-7403.14.2.177. [DOI] [PubMed] [Google Scholar]
  77. Gordon M. The assessment of impulsivity and mediating behavior in hyperactive and nonhyperactive boys. Journal of Abnormal Child Psychology. 1979;7:317–326. doi: 10.1007/BF00916541. [DOI] [PubMed] [Google Scholar]
  78. Grondin S. Timing and time perception: A review of recent behavioral and neuroscience findings and theoretical directions. Attention, Perception, & Psychophysics. 2010;72:561–582. doi: 10.3758/APP.72.3.561. [DOI] [PubMed] [Google Scholar]
  79. Guilhardi P, Keen R, MacInnis MLM, Church RM. How rats combine temporal cues. Behavioural Processes. 2005;69:189–205. doi: 10.1016/j.beproc.2005.02.004. [DOI] [PubMed] [Google Scholar]
  80. Hancock RA. Tests of the conditioned reinforcement value of sequential stimuli in pigeons. Animal Learning & Behavior. 1982;10:46–54. doi: 10.3758/BF03212045. [DOI] [Google Scholar]
  81. Harrington DL, Haaland KY, Knight RT. Cortical networks underlying mechanisms of time perception. The Journal of Neuroscience. 1998;18:1085–1095. doi: 10.1523/JNEUROSCI.18-03-01085.1998. [DOI] [PMC free article] [PubMed] [Google Scholar]
  82. Harrington DL, Lee RR, Boyd LA, Rapcsak SZ, Knight RT. Does the representation of time depend on the cerebellum?: Effect of cerebellar stroke. Brain. 2004;127:561–574. doi: 10.1093/brain/awh065. [DOI] [PubMed] [Google Scholar]
  83. Harris JA, Gharaei S, Pincham HL. Response rates track the history of reinforcement times. Journal of Experimental Psychology: Animal Behavior Processes. 2011;37:277–286. doi: 10.1037/a0023079. [DOI] [PubMed] [Google Scholar]
  84. Hastings M, O’Neill JS, Maywood ES. Circadian clocks: regulators of endocrine and metabolic rhythms. Journal of Endocrinology. 2007;195:187–198. doi: 10.1677/JOE-07-0378. [DOI] [PubMed] [Google Scholar]
  85. Hawking SW. A brief history of time. New York: Bantam Books; 1988. [Google Scholar]
  86. Heerey EA, Robinson BM, McMahon RP, Gold JM. Delay discounting in schizophrenia. Cognitive Neuropsychiatry. 2007;12:213–221. doi: 10.1080/13546800601005900. [DOI] [PMC free article] [PubMed] [Google Scholar]
  87. Hill JC, Covarrubias P, Terry J, Sanabria F. The effect of methylphenidate and rearing environment on behavioral inhibition in adult male rats. Psychopharmacology. 2012;219:353–362. doi: 10.1007/s00213-011-2552-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  88. Holland PC. Trial and intertrial durations in appetitive conditioning in rats. Animal Learning & Behavior. 2000;28:121–135. doi: 10.3758/BF03200248. [DOI] [Google Scholar]
  89. Huskinson SL, Krebs CA, Anderson KG. Strain differences in delay discounting between Lewis and Fischer 344 rats at baseline and following acute and chronic administration of d-amphetamine. Pharmacology, Biochemistry and Behavior. 2012;101:403–416. doi: 10.1016/j.pbb.2012.02.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  90. Ivry RB. The representation of temporal information in perception and motor control. Current Opinion in Neurobiology. 1996;6:851–857. doi: 10.1016/S0959-4388(96)80037-7. [DOI] [PubMed] [Google Scholar]
  91. Ivry RB, Keele SW. Timing functions of the cerebellum. Journal of Cognitive Neuroscience. 1989;1:136–152. doi: 10.1162/jocn.1989.1.2.136. [DOI] [PubMed] [Google Scholar]
  92. Jasselette P, Lejeune H, Wearden JH. The perching response and the laws of animal timing. Journal of Experimental Psychology: Animal Behavior Processes. 1990;16:150–161. doi: 10.1037/0097-7403.16.2.150. [DOI] [Google Scholar]
  93. Jennings DJ, Alonso E, Mondragón E, Bonardi C. Temporal Uncertainty During Overshadowing: A Temporal Difference Account. In: Alonso E, Mondragón E, editors. Computational Neuroscience for Advancing Artificial Intelligence: Models, Methods and Applications. Hershey, PA: Medical Information Science Reference; 2011. pp. 46–55. [Google Scholar]
  94. Jennings DJ, Bonardi C, Kirkpatrick K. Overshadowing and stimulus duration. Journal of Experimental Psychology: Animal Behavior Processes. 2007;33:464–475. doi: 10.1037/0097-7403.33.4.464. [DOI] [PubMed] [Google Scholar]
  95. Jennings DJ, Kirkpatrick K. Interval duration effects on blocking in appetitive conditioning. Behavioural Processes. 2006;71:318–329. doi: 10.1016/j.beproc.2005.11.007. [DOI] [PubMed] [Google Scholar]
  96. Jentsch JD, Taylor JR. Impulsivity resulting from frontostriatal dysfunction in drug abuse: Implications for the control of behavior by reward-related stimuli. Psychopharmacology. 1999;146:373–390. doi: 10.1007/PL00005483. [DOI] [PubMed] [Google Scholar]
  97. Jimura K, Myerson J, Hilgard J, Keighley J, Braver TS, Green L. Domain independence and stability in young and older adults’ discounting of delayed rewards. Behavioural Processes. 2011;87:253–259. doi: 10.1016/j.beproc.2011.04.006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  98. Johnson MW, Bickel WK, Baker F. Moderate drug use and delay discounting: A comparison of heavy, light, and never smokers. Experimental and Clinical Psychopharmacology. 2007;15:187–194. doi: 10.1037/1064-1297.15.2.187. [DOI] [PubMed] [Google Scholar]
  99. Jozefowiez J, Staddon JER, Cerutti DT. The behavioral economics of choice and interval timing. Psychological Review. 2009;116:519–539. doi: 10.1037/a0016171. [DOI] [PMC free article] [PubMed] [Google Scholar]
  100. Kable JW, Glimcher PW. The neural correlates of subjective value during intertemporal choice. Nature Neuroscience. 2007;10:1625–1633. doi: 10.1038/nn2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  101. Kable JW, Glimcher PW. The neurobiology of decision: consensus and controversy. Neuron. 2009;63:733–745. doi: 10.1016/j.neuron.2009.09.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  102. Kalafut KL, Freestone DM, MacInnis MLM, Church RM. Integrating timing and conditioning approaches to study behavior. Journal of Experimental Psychology: Animal Learning and Cognition. 2014;40:431–439. doi: 10.1037/xan0000037. [DOI] [PubMed] [Google Scholar]
  103. Kamin LJ. “Attention-like” processes in classical conditioning. In: Jones MR, editor. Miami symposium on the prediction of behavior: Aversive stimulation. Coral Gables, FL: University of Miami Press; 1968. pp. 9–33. [Google Scholar]
  104. Kamin LJ. Predictability, surprise, attention, and conditioning. In: Campbell BA, Church RM, editors. Punishment and aversive behavior. New York: Appleton-Century-Crofts; 1969. pp. 276–296. [Google Scholar]
  105. Kehoe EJ. CS-US contiguity and CS intensity in conditioning of the rabbit’s nictitating membrane response to serial compound stimuli. J Exp Psychol Anim Behav Process. 1983;9:307–319. doi: 10.1037/0097-7403.9.3.307. [DOI] [PubMed] [Google Scholar]
  106. Kehoe EJ, Ludvig EA, Dudeney JE, Neufeld J, Sutton RS. Magnitude and timing of nictitating membrane movements during classical conditioning of the rabbit (Oryctolagus cuniculus) Behavioral Neuroscience. 2008;122:471–476. doi: 10.1037/0735-7044.122.2.471. [DOI] [PubMed] [Google Scholar]
  107. Kehoe EJ, Schreurs BG, Amodei N. Blocking acquisition of the rabbit’s nictitating membrane response to serial conditioned stimuli. Learning and Motivation. 1981;12:92–108. doi: 10.1016/0023-9690(81)90026-6. [DOI] [Google Scholar]
  108. Kehoe EJ, Schreurs BG, Graham P. Temporal primacy overrides prior training in serial compound conditioning of the rabbit’s nictitating membrane response. Animal Learning & Behavior. 1987;15:455–464. doi: 10.3758/BF03205056. [DOI] [Google Scholar]
  109. Killeen PR, Fetterman JG. A behavioral theory of timing. Psychological Review. 1988;95:274–295. doi: 10.1037/0033-295X.95.2.274. [DOI] [PubMed] [Google Scholar]
  110. Kirby KN. One-year temporal stability of delay-discount rates. Psychonomic Bulletin & Review. 2009;16:457–462. doi: 10.3758/PBR.16.3.457. [DOI] [PubMed] [Google Scholar]
  111. Kirkpatrick-Steger K, Miller SS, Betti CA, Wasserman EA. Cyclic responding by pigeons on the peak timing procedure. Journal of Experimental Psychology: Animal Behavior Processes. 1996;22:447–460. doi: 10.1037/0097-7403.22.4.447. [DOI] [PubMed] [Google Scholar]
  112. Kirkpatrick K. Packet theory of conditioning and timing. Behavioural Processes. 2002;57:89–106. doi: 10.1016/S0376-6357(02)00007-4. [DOI] [PubMed] [Google Scholar]
  113. Kirkpatrick K. Interactions of timing and prediction error learning. Behavioural Processes. 2014:135–145. doi: 10.1016/j.beproc.2013.08.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  114. Kirkpatrick K, Church RM. Are separate theories of conditioning and timing necessary? Behavioural Processes. 1998;44:163–182. doi: 10.1016/S0376-6357(98)00047-3. [DOI] [PubMed] [Google Scholar]
  115. Kirkpatrick K, Church RM. Independent effects of stimulus and cycle duration in conditioning: The role of timing processes. Animal Learning & Behavior. 2000a;28:373–388. doi: 10.3758/BF03200271. [DOI] [Google Scholar]
  116. Kirkpatrick K, Church RM. Stimulus and temporal cues in classical conditioning. Journal of Experimental Psychology: Animal Behavior Processes. 2000b;26:206–219. doi: 10.1037/0097-7403.26.2.206. [DOI] [PubMed] [Google Scholar]
  117. Kirkpatrick K, Church RM. Tracking of the expected time to reinforcement in temporal conditioning procedures. Learning & Behavior. 2003;31:3–21. doi: 10.3758/BF03195967. [DOI] [PubMed] [Google Scholar]
  118. Kirkpatrick K, Church RM. Temporal learning in random control procedures. Journal of Experimental Psychology: Animal Behavior Processes. 2004;30:213–228. doi: 10.1037/0097-7403.30.3.213. [DOI] [PubMed] [Google Scholar]
  119. Kohler EA, Ayres JJB. The Kamin blocking effect with variable-duration CSs. Animal Learning & Behavior. 1979;7:347–350. doi: 10.3758/BF03209681. [DOI] [Google Scholar]
  120. Kohler EA, Ayres JJB. Blocking with serial and simultaneous compounds in a trace conditioning procedure. Animal Learning & Behavior. 1982;10:277–287. doi: 10.3758/BF03213711. [DOI] [Google Scholar]
  121. Kramer TJ, Rilling M. Differential reinforcement of low rates: a selective critique. Psychological Bulletin. 1970;74:225–254. doi: 10.1037/h0029813. [DOI] [Google Scholar]
  122. Lattal KM. Trial and intertrial durations in Pavlovian conditioning: Issues of learning and performance. Journal of Experimental Psychology: Animal Behavior Processes. 1999;25:433–450. doi: 10.1037/0097-7403.25.4.433. [DOI] [PubMed] [Google Scholar]
  123. Laude J, Stagner J, Rayburn-Reeves R, Zentall T. Midsession reversals with pigeons: visual versus spatial discriminations and the intertrial interval. Learning & Behavior. 2014;42:40–46. doi: 10.3758/s13420-013-0122-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  124. Leak TM, Gibbon J. Simultaneous timing of multiple intervals: implications of the scalar property. Journal of Experimental Psychology: Animal Behavior Processes. 1995;21:3–19. doi: 10.1037/0097-7403.21.1.3. [DOI] [PubMed] [Google Scholar]
  125. Lejeune H. Switching or gating? The attentional challenge in cognitive models of psychological time. Behavioural Processes. 1998;44:127–145. doi: 10.1016/S0376-6357(98)00045-X. [DOI] [PubMed] [Google Scholar]
  126. Lejeune H, Wearden JH. The comparative psychology of fixed-interval responding: some quantitative analyses. Learning and Motivation. 1991;22:84–111. doi: 10.1016/0023-9690(91)90018-4. [DOI] [Google Scholar]
  127. Leon MI, Shadlen MN. Representation of time by neurons in the posterior parietal cortex of the macaque. Neuron. 2003;38:317–327. doi: 10.1016/S0896-6273(03)00185-5. [DOI] [PubMed] [Google Scholar]
  128. Lewis PA, Miall RC. Distinct systems for automatic and cognitively controlled time measurement: evidence from neuroimaging. Current Opinion in Neurobiology. 2003;13:250–255. doi: 10.1016/S0959-4388(03)00036-9. [DOI] [PubMed] [Google Scholar]
  129. Lustig C, Matell MS, Meck WH. Not “just” a coincidence: Frontal-striatal interactions in working memory and interval timing. Memory. 2005;13:441–448. doi: 10.1080/09658210344000404. [DOI] [PubMed] [Google Scholar]
  130. Machado A. Learning the temporal dynamics of behavior. Psychological Review. 1997;104:241–265. doi: 10.1037/0033-295X.104.2.241. [DOI] [PubMed] [Google Scholar]
  131. MacInnis MLM. Do rats time filled and empty intervals of equal duration differently? Behavioural Processes. 2007;75:182–187. doi: 10.1016/j.beproc.2007.02.006. [DOI] [PubMed] [Google Scholar]
  132. MacInnis MLM, Marshall AT, Freestone DM, Church RM. A simultaneous temporal processing account of response rate. Behavioural Processes. 2010;84:506–510. doi: 10.1016/j.beproc.2009.12.019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  133. Madden GJ, Smith NG, Brewer AT, Pinkston JW, Johnson PS. Steady-state assessment of impulsive choice in Lewis and Fischer 344 rats: Between-condition delay manipulations. Journal of the Experimental Analysis of Behavior. 2008;90:333–344. doi: 10.1901/jeab.2008.90-333. [DOI] [PMC free article] [PubMed] [Google Scholar]
  134. Maleske RT, Frey PW. Blocking in eyelid conditioning: Effect of changing the CS-US interval and introducing an intertrial stimulus. Animal Learning & Behavior. 1979;7:452–456. doi: 10.3758/BF03209700. [DOI] [Google Scholar]
  135. Marshall AT, Smith AP, Kirkpatrick K. Mechanisms of impulsive choice: I. Individual differences in interval timing and reward processing. Journal of the Experimental Analysis of Behavior. 2014;102:86–101. doi: 10.1002/jeab.88. [DOI] [PMC free article] [PubMed] [Google Scholar]
  136. Matell MS, Meck WH. Cortico-striatal circuits and interval timing: coincidence detection of oscillatory processes. Cognitive Brain Research. 2004;21:139–170. doi: 10.1016/j.cogbrainres.2004.06.012. [DOI] [PubMed] [Google Scholar]
  137. Matell MS, Shea-Brown E, Gooch C, Wilson AG, Rinzel J. A heterogeneous population code for elapsed time in rat medial agranular cortex. Behavioral Neuroscience. 2011;125:54–73. doi: 10.1037/a0021954. [DOI] [PMC free article] [PubMed] [Google Scholar]
  138. Matthews BA, Shimoff E, Catania AC, Sagvolden T. Uninstructed human responding: sensitivity to ratio and interval contingencies. Journal of the Experimental Analysis of Behavior. 1977;27:453–467. doi: 10.1901/jeab.1977.27-453. [DOI] [PMC free article] [PubMed] [Google Scholar]
  139. Matusiewicz AK, Carter AE, Landes RD, Yi R. Statistical equivalence and test-retest reliability of delay and probability discounting using real and hypothetical rewards. Behavioural Processes. 2013;100:116–122. doi: 10.1016/j.beproc.2013.07.019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  140. Mauk MD, Buonomano DV. The neural basis of temporal processing. Annual Review of Neuroscience. 2004;27:307–340. doi: 10.1146/annurev.neuro.27.070203.144247. [DOI] [PubMed] [Google Scholar]
  141. Mazur JE. An adjusting procedure for studying delayed reinforcement. In: Commons ML, Mazur JE, Nevin JA, Rachlin H, editors. Quantitative analyses of behavior. Vol. 5. The effect of delay and of intervening events on reinforcer value. Hillsdale, NJ: Erlbaum; 1987. pp. 55–73. [Google Scholar]
  142. McClure J, Podos J, Richardson HN. Isolating the delay component of impulsive choice in adolescent rats. Frontiers in Integrative Neuroscience. 2014;8:1–9. doi: 10.3389/fnint.2014.00003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  143. McGuire JT, Kable JW. Decision makers calibrate behavioral persistence on the basis of time-interval experience. Cognition. 2012;124:216–226. doi: 10.1016/j.cognition.2012.03.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  144. McGuire JT, Kable JW. Rational temporal predictions can underlie apparent failures to delay gratification. Psychological Review. 2013;120:395–410. doi: 10.1037/a0031910. [DOI] [PMC free article] [PubMed] [Google Scholar]
  145. McMillan N, Roberts WA. The effects of cue competition on timing in pigeons. Behavioural Processes. 2010;84:581–590. doi: 10.1016/j.beproc.2010.02.018. [DOI] [PubMed] [Google Scholar]
  146. McMillan N, Roberts WA. Pigeons make errors as a result of interval timing in a visual, but not a visual-spatial, midsession reversal task. Journal of Experimental Psychology: Animal Behavior Processes. 2012;38:440–445. doi: 10.1037/a0030192. [DOI] [PubMed] [Google Scholar]
  147. McMillan N, Roberts WA. A three-stimulus midsession reversal task in pigeons with visual and spatial discriminative stimuli. Animal Cognition. 2015;18:373–383. doi: 10.1007/s10071-014-0808-2. [DOI] [PubMed] [Google Scholar]
  148. Mechner F, Guevrekian L. Effects of deprivation upon counting and timing in rats. Journal of the Experimental Analysis of Behavior. 1962;5:463–466. doi: 10.1901/jeab.1962.5-463. [DOI] [PMC free article] [PubMed] [Google Scholar]
  149. Meck WH, Benson AM. Dissecting the brain’s internal clock: How frontal-striatal circuitry keeps times and shifts attention. Brain and Cognition. 2002;48:195–211. doi: 10.1006/brcg.2001.1313. [DOI] [PubMed] [Google Scholar]
  150. Meck WH, Church RM. A mode control model of counting and timing processes. Journal of Experimental Psychology: Animal Behavior Processes. 1983;9:320–334. doi: 10.1037/0097-7403.9.3.320. [DOI] [PubMed] [Google Scholar]
  151. Meck WH, Church RM. Simultaneous temporal processing. Journal of Experimental Psychology: Animal Behavior Processes. 1984;10:1–29. doi: 10.1037/0097-7403.10.1.1. [DOI] [PubMed] [Google Scholar]
  152. Miller EK, Cohen JD. An integrative theory of prefrontal cortex function. Annual Review of Neuroscience. 2001;24:167–202. doi: 10.1146/annurev.neuro.24.1.167. [DOI] [PubMed] [Google Scholar]
153. Morillon B, Kell CA, Giraud AL. Three stages and four neural systems in time estimation. The Journal of Neuroscience. 2009;29:14803–14811. doi: 10.1523/JNEUROSCI.3222-09.2009.
154. Myerson J, Green L. Discounting of delayed rewards: models of individual choice. Journal of the Experimental Analysis of Behavior. 1995;64:263–276. doi: 10.1901/jeab.1995.64-263.
155. O’Donnell JM, Seiden LS. Differential-reinforcement-of-low-rate 72-second schedule: Selective effects of antidepressant drugs. Journal of Pharmacology and Experimental Therapeutics. 1983;224:80–88.
156. Ohmura Y, Takahashi T, Kitamura N, Wehr P. Three-month stability of delay and probability discounting measures. Experimental and Clinical Psychopharmacology. 2006;14:318–328. doi: 10.1037/1064-1297.14.3.318.
157. Ohyama T, Mauk MD. Latent acquisition of timed responses in cerebellar cortex. The Journal of Neuroscience. 2001;21:682–690. doi: 10.1523/JNEUROSCI.21-02-00682.2001.
158. Pavlov IP. Conditioned reflexes. New York: Dover; 1927.
159. Penney TB, Meck WH, Roberts SA, Gibbon J, Erlenmeyer-Kimling L. Interval-timing deficits in individuals at high risk for schizophrenia. Brain and Cognition. 2005;58:109–118. doi: 10.1016/j.bandc.2004.09.012.
160. Peters J, Büchel C. Overlapping and distinct neural systems code for subjective value during intertemporal and risky decision making. The Journal of Neuroscience. 2009;29:15727–15734. doi: 10.1523/JNEUROSCI.3489-09.2009.
161. Peterson JD, Wolf ME, White F. Impaired DRL 30 performance during amphetamine withdrawal. Behavioural Brain Research. 2003;143:101–108. doi: 10.1016/S0166-4328(03)00035-4.
162. Pizzo MJ, Kirkpatrick K, Blundell PJ. The effect of changes in criterion value on differential reinforcement of low rate schedule performance. Journal of the Experimental Analysis of Behavior. 2009;92:181–198. doi: 10.1901/jeab.2009.92-181.
163. Plowright CMS. Simultaneous processing of short delays and higher order temporal intervals within a session by pigeons. Behavioural Processes. 1996;38:1–9. doi: 10.1016/0376-6357(96)00009-5.
164. Rakitin BC, Gibbon J, Penney TB, Malapani C, Hinton SC, Meck WH. Scalar expectancy and peak-interval timing in humans. Journal of Experimental Psychology: Animal Behavior Processes. 1998;24:15–33. doi: 10.1037/0097-7403.24.1.15.
165. Rao SM, Mayer AR, Harrington DL. The evolution of brain activation during temporal processing. Nature Neuroscience. 2001;4:317–323. doi: 10.1038/85191.
166. Rayburn-Reeves RM, Laude JR, Zentall TR. Pigeons show near-optimal win-stay/lose-shift performance on a simultaneous-discrimination, midsession reversal task with short intertrial intervals. Behavioural Processes. 2013;92:65–70. doi: 10.1016/j.beproc.2012.10.011.
167. Rayburn-Reeves RM, Stagner JP, Kirk CR, Zentall TR. Reversal learning in rats (Rattus norvegicus) and pigeons (Columba livia): qualitative differences in behavioral flexibility. Journal of Comparative Psychology. 2013;127:202–211. doi: 10.1037/a0026311.
168. Reppert SM, Weaver DR. Coordination of circadian timing in mammals. Nature. 2002;418:935–941. doi: 10.1038/nature00965.
169. Rescorla RA. Probability of shock in the presence and absence of CS in fear conditioning. Journal of Comparative and Physiological Psychology. 1968;66:1–5. doi: 10.1037/h0025984.
170. Richards JB, Sabol KE, Seiden LS. DRL interresponse-time distributions: Quantification by peak deviation analysis. Journal of the Experimental Analysis of Behavior. 1993;60:361–385. doi: 10.1901/jeab.1993.60-361.
171. Richards JB, Seiden LS. A quantitative interresponse-time analysis of DRL performance differentiates similar effects of the antidepressant desipramine and the novel anxiolytic gepirone. Journal of the Experimental Analysis of Behavior. 1991;56:173–192. doi: 10.1901/jeab.1991.56-173.
172. Richardson WK, Loughead TE. Behavior under large values of the differential-reinforcement-of-low-rate schedule. Journal of the Experimental Analysis of Behavior. 1974;22:121–129. doi: 10.1901/jeab.1974.22-121.
173. Roberts S. Isolation of an internal clock. Journal of Experimental Psychology: Animal Behavior Processes. 1981;7:242–268. doi: 10.1037/0097-7403.7.3.242.
174. Roberts WA, Coughlin R, Roberts S. Pigeons flexibly time or count on cue. Psychological Science. 2000;11:218–222. doi: 10.1111/1467-9280.00244.
175. Rodriguez ML, Logue AW. Adjusting delay to reinforcement: Comparing choice in pigeons and humans. Journal of Experimental Psychology: Animal Behavior Processes. 1988;14:105–117. doi: 10.1037/0097-7403.14.1.105.
176. Sanabria F, Killeen PR. Evidence for impulsivity in the Spontaneously Hypertensive Rat drawn from complementary response-withholding tasks. Behavioral and Brain Functions. 2008;4:1–17. doi: 10.1186/1744-9081-4-7.
177. Savastano HI, Miller RR. Time as content in Pavlovian conditioning. Behavioural Processes. 1998;44:147–162. doi: 10.1016/S0376-6357(98)00046-1.
178. Schreurs BG, Westbrook RF. The effects of changes in the CS-US interval during compound conditioning upon an otherwise blocked element. The Quarterly Journal of Experimental Psychology B: Comparative and Physiological Psychology. 1982;34:19–30. doi: 10.1080/14640748208400887.
179. Schubotz RI, Friederici AD, Yves von Cramon D. Time perception and motor timing: a common cortical and subcortical basis revealed by fMRI. NeuroImage. 2000;11:1–12. doi: 10.1006/nimg.1999.0514.
180. Schultz W, Dayan P, Montague PR. A neural substrate of prediction and reward. Science. 1997;275:1593–1599. doi: 10.1126/science.275.5306.1593.
181. Schwartze M, Rothermich K, Kotz SA. Functional dissociation of pre-SMA and SMA-proper in temporal processing. NeuroImage. 2012;60:290–298. doi: 10.1016/j.neuroimage.2011.11.089.
182. Shannon CE. A mathematical theory of communication. The Bell System Technical Journal. 1948;27:623–656. doi: 10.1002/j.1538-7305.1948.tb00917.x.
183. Shull RL, Gaynor ST, Grimes JA. Response rate viewed as engagement bouts: effects of relative reinforcement and schedule type. Journal of the Experimental Analysis of Behavior. 2001;75:247–274. doi: 10.1901/jeab.2001.75-247.
184. Shull RL, Gaynor ST, Grimes JA. Response rate viewed as engagement bouts: resistance to extinction. Journal of the Experimental Analysis of Behavior. 2002;77:211–231. doi: 10.1901/jeab.2002.77-211.
185. Shull RL, Grimes JA. Bouts of responding from variable-interval reinforcement of lever pressing by rats. Journal of the Experimental Analysis of Behavior. 2003;80:159–171. doi: 10.1901/jeab.2003.80-159.
186. Shull RL, Grimes JA, Bennett JA. Bouts of responding: the relation between bout rate and the rate of variable-interval reinforcement. Journal of the Experimental Analysis of Behavior. 2004;81:65–83. doi: 10.1901/jeab.2004.81-65.
187. Simen P, Balcı F, deSouza L, Cohen JD, Holmes P. A model of interval timing by neural integration. The Journal of Neuroscience. 2011;31:9238–9253. doi: 10.1523/JNEUROSCI.3121-10.2011.
188. Smith AP, Marshall AT, Kirkpatrick K. Mechanisms of impulsive choice: II. Time-based interventions to improve self-control. Behavioural Processes. 2015;112:29–42. doi: 10.1016/j.beproc.2014.10.010.
189. Snapper AG, Kadden RM, Shimoff EH, Schoenfeld WN. Stimulus intrusion on fixed-interval responding in the rat: the effects of electric shock intensity, temporal location, and response contingency. Learning and Motivation. 1975;6:367–384. doi: 10.1016/0023-9690(75)90016-8.
190. Soffié M, Lejeune H. Acquisition and long-term retention of a 2-lever DRL schedule: comparison between mature and aged rats. Neurobiology of Aging. 1991;12:25–30. doi: 10.1016/0197-4580(91)90035-I.
191. Soffié M, Lejeune H. Cholinergic blockade and response timing in rats. Psychopharmacology. 1992;106:215–220. doi: 10.1007/BF02801975.
192. Solanto MV. Dopamine dysfunction in AD/HD: integrating clinical and basic neuroscience research. Behavioural Brain Research. 2002;130:65–71. doi: 10.1016/S0166-4328(01)00431-4.
193. Staddon JER, Higa JJ. Time and memory: towards a pacemaker-free theory of interval timing. Journal of the Experimental Analysis of Behavior. 1999;71:215–251. doi: 10.1901/jeab.1999.71-215.
194. Stagner J, Michler D, Rayburn-Reeves R, Laude J, Zentall T. Midsession reversal learning: why do pigeons anticipate and perseverate? Learning & Behavior. 2013;41:54–60. doi: 10.3758/s13420-012-0077-3.
195. Stein JS, Pinkston JW, Brewer AT, Francisco MT, Madden GJ. Delay discounting in Lewis and Fischer 344 rats: steady-state and rapid-determination adjusting-amount procedures. Journal of the Experimental Analysis of Behavior. 2012;97:305–321. doi: 10.1901/jeab.2012.97-305.
196. Stevens MC, Kiehl KA, Pearlson G, Calhoun VD. Functional neural circuits for mental timekeeping. Human Brain Mapping. 2007;28:394–408. doi: 10.1002/hbm.20285.
197. Takahashi T. Loss of self-control in intertemporal choice may be attributable to logarithmic time-perception. Medical Hypotheses. 2005;65:691–693. doi: 10.1016/j.mehy.2005.04.040.
198. Takahashi T, Han R, Nakamura F. Time discounting: psychophysics of intertemporal and probabilistic choices. Journal of Behavioral Economics and Finance. 2012;5:10–14. doi: 10.11167/jbef.5.10.
199. Takahashi T, Oono H, Radford MHB. Psychophysics of time perception and intertemporal choice models. Physica A: Statistical and Theoretical Physics (Amsterdam). 2008;387:2066–2074. doi: 10.1016/j.physa.2007.11.047.
200. Tallal P, Miller S, Fitch RH. Neurobiological basis of speech: a case for the preeminence of temporal processing. Annals of the New York Academy of Sciences. 1993;682:27–47. doi: 10.1111/j.1749-6632.1993.tb22957.x.
201. Toplak ME, Dockstader C, Tannock R. Temporal information processing in ADHD: findings to date and new methods. Journal of Neuroscience Methods. 2006;151:15–29. doi: 10.1016/j.jneumeth.2005.09.018.
202. Tregellas JR, Davalos DB, Rojas DC. Effect of task difficulty on the functional anatomy of temporal processing. NeuroImage. 2006;32:307–315. doi: 10.1016/j.neuroimage.2006.02.036.
203. Waelti P, Dickinson A, Schultz W. Dopamine responses comply with basic assumptions of formal learning theory. Nature. 2001;412:43–48. doi: 10.1038/35083500.
204. Ward RD, Kellendonk C, Kandel ER, Balsam PD. Timing as a window on cognition in schizophrenia. Neuropharmacology. 2012;62:1175–1181. doi: 10.1016/j.neuropharm.2011.04.014.
205. Wearden JH. Maximizing reinforcement rate on spaced-responding schedules under conditions of temporal uncertainty. Behavioural Processes. 1990;22:47–59. doi: 10.1016/0376-6357(90)90007-3.
206. Williams DA, Lawson C, Cook R, Mather AA, Johns KW. Timed excitatory conditioning under zero and negative contingencies. Journal of Experimental Psychology: Animal Behavior Processes. 2008;34:94–105. doi: 10.1037/0097-7403.34.1.94.
207. Williams DA, LoLordo VM. Time cues block the CS, but the CS does not block time cues. Quarterly Journal of Experimental Psychology B. 1995;48:97–116. doi: 10.1080/14640749508401441.
208. Wilson AG, Crystal JD. Prospective memory in the rat. Animal Cognition. 2012;15:349–358. doi: 10.1007/s10071-011-0459-5.
209. Wilson AG, Pizzo MJ, Crystal JD. Event-based prospective memory in the rat. Current Biology. 2013;23:1089–1093. doi: 10.1016/j.cub.2013.04.067.
210. Wittmann M, Leland DS, Churan J, Paulus MP. Impaired time perception and motor timing in stimulant-dependent subjects. Drug and Alcohol Dependence. 2007;90:183–192. doi: 10.1016/j.drugalcdep.2007.03.005.
211. Wittmann M, Paulus MP. Decision making, impulsivity and time perception. Trends in Cognitive Sciences. 2008;12:7–12. doi: 10.1016/j.tics.2007.10.004.
212. Wynne CDL, Staddon JER. Typical delay determines waiting time on periodic-food schedules: static and dynamic tests. Journal of the Experimental Analysis of Behavior. 1988;50:197–210. doi: 10.1901/jeab.1988.50-197.
213. Wynne CDL, Staddon JER. Waiting in pigeons: the effects of daily intercalation on temporal discrimination. Journal of the Experimental Analysis of Behavior. 1992;58:47–66. doi: 10.1901/jeab.1992.58-47.
214. Yi L. Do rats represent time logarithmically or linearly? Behavioural Processes. 2009;81:274–279. doi: 10.1016/j.beproc.2008.10.004.
215. Zakay D, Block RA. The role of attention in time estimation processes. In: Pastor MA, Artieda J, editors. Time, internal clocks and movement. New York: Elsevier; 1996. pp. 143–164.
216. Zakay D, Block RA. Temporal cognition. Current Directions in Psychological Science. 1997;6:12–16. doi: 10.1111/1467-8721.ep11512604.
217. Zakay D, Block RA. Prospective and retrospective duration judgments: an executive-control perspective. Acta Neurobiologiae Experimentalis. 2004;64:319. doi: 10.3758/bf03209393.