Cognitive, Affective, & Behavioral Neuroscience. 2016 Nov 30;17(1):24–76. doi: 10.3758/s13415-016-0463-y

A neural model of normal and abnormal learning and memory consolidation: adaptively timed conditioning, hippocampus, amnesia, neurotrophins, and consciousness

Daniel J. Franklin and Stephen Grossberg
PMCID: PMC5272895  PMID: 27905080

Abstract

How do the hippocampus and amygdala interact with thalamocortical systems to regulate cognitive and cognitive-emotional learning? Why do lesions of thalamus, amygdala, hippocampus, and cortex have differential effects depending on the phase of learning when they occur? In particular, why is the hippocampus typically needed for trace conditioning, but not delay conditioning, and what do the exceptions reveal? Why do amygdala lesions made before or immediately after training decelerate conditioning while those made later do not? Why do thalamic or sensory cortical lesions degrade trace conditioning more than delay conditioning? Why do hippocampal lesions during trace conditioning experiments degrade recent but not temporally remote learning? Why do orbitofrontal cortical lesions degrade temporally remote but not recent or post-lesion learning? How is temporally graded amnesia caused by ablation of prefrontal cortex after memory consolidation? How are attention and consciousness linked during conditioning? How do neurotrophins, notably brain-derived neurotrophic factor (BDNF), influence memory formation and consolidation? Is there a common output path for learned performance? A neural model proposes a unified answer to these questions that overcomes problems of alternative memory models.

Keywords: Cognitive-emotional learning, Conditioning, Memory consolidation, Amnesia, Hippocampus, Amygdala, Pontine nuclei, Adaptive timing, Time cells, BDNF

Overview and scope

The roles and interactions of amygdala, hippocampus, thalamus, and neocortex in cognitive and cognitive-emotional learning, memory, and consciousness have been extensively investigated through experimental and clinical studies (Berger & Thompson, 1978; Clark, Manns, & Squire, 2001; Frankland & Bontempi, 2005; Kim, Clark, & Thompson, 1995; Lee & Kim, 2004; Mauk & Thompson, 1987; Moustafa et al., 2013; Port, Romano, Steinmetz, Mikhail, & Patterson, 1986; Powell & Churchwell, 2002; Smith, 1968; Takehara, Kawahara, & Kirino, 2003). This article develops a neural model aimed at providing a unified explanation of challenging data about how these brain regions interact during normal learning, and how lesions may cause specific learning and behavioral deficits, including amnesia. The model also makes testable predictions that can further probe its explanations. The most relevant experiments use the paradigm of classical conditioning, notably delay conditioning and trace conditioning during the eyeblink conditioning task that is often used to explicate basic properties of associative learning. Earlier versions of this work were briefly presented in Franklin and Grossberg (2005, 2008).

Eyeblink conditioning has been extensively studied because it has disclosed behavioral, neurophysiological, and anatomical information about the learning and memory processes related to adaptively timed, conditioned responses to aversive stimuli, as measured by eyelid movements in mice (Chen et al., 1995), rats (Clark, Broadbent, Zola, & Squire, 2002; Neufeld & Mintz, 2001; Schmajuk, Lam, & Christiansen, 1994), monkeys (Clark & Zola, 1998), and humans (Clark, Manns, & Squire, 2001; Solomon et al., 1990), and by the timing and amplitude of the nictitating membrane reflex (NMR), in which a nictitating membrane covers the eye like an eyelid, in cats (Norman et al., 1974), rabbits (Berger & Thompson, 1978; Christian & Thompson, 2003; McLaughlin, Skaggs, Churchwell, & Powell, 2002; Port, Mikhail, & Patterson, 1985; Port et al., 1986; Powell & Churchwell, 2002; Powell, Skaggs, Churchwell, & McLaughlin, 2001; Solomon et al., 1990), and other animals. Eyeblink/NMR conditioning data will herein be used to help formulate and answer basic questions about associative learning, adaptive timing, and memory consolidation.

Classical conditioning involves learning associations between objects or events. Eyeblink conditioning associates a neutral event, such as a tone or a light, called the conditioned stimulus (CS), with an emotionally-charged, reflex-inducing event, such as a puff of air to the eye or a shock to the periorbital area, called the unconditioned stimulus (US). Delay conditioning occurs when the stimulus events temporally overlap so that the subject learns to make a conditioned response (CR) in anticipation of the US (Fig. 1). Trace conditioning involves a temporal gap between CS offset and US onset such that a CS-activated memory trace is required during the inter-stimulus interval (ISI) in order to establish an adaptively timed association between CS and US that leads to a successful CR (Pavlov, 1927).
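
To make these paradigm definitions concrete, here is a minimal Python sketch (purely illustrative and not part of the nSTART model code; all durations are arbitrary choices) that constructs binary CS and US timelines for one delay trial and one trace trial:

```python
import numpy as np

def make_trial(cs_on, cs_off, us_on, us_off, length=1000):
    """Return binary CS and US arrays sampled at 1 ms for a single trial."""
    t = np.arange(length)
    cs = ((t >= cs_on) & (t < cs_off)).astype(float)
    us = ((t >= us_on) & (t < us_off)).astype(float)
    return cs, us

# Delay conditioning: the CS stays on until the US arrives, so the two overlap.
cs_delay, us_delay = make_trial(cs_on=100, cs_off=600, us_on=550, us_off=600)

# Trace conditioning: a stimulus-free gap separates CS offset from US onset,
# so a CS-activated memory trace must bridge the interval.
cs_trace, us_trace = make_trial(cs_on=100, cs_off=350, us_on=600, us_off=650)

isi = 600 - 100        # inter-stimulus interval: CS onset to US onset (ms)
trace_gap = 600 - 350  # temporal gap that the memory trace must span (ms)
print(isi, trace_gap)  # 500 250
```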

Fig. 1

Eyeblink conditioning associates a neutral event, called the conditioned stimulus (CS), with an emotionally-charged, reflex-inducing event, called the unconditioned stimulus (US). Delay conditioning occurs when the stimulus events temporally overlap. Trace conditioning involves a temporal gap between CS offset and US onset such that a CS-activated memory trace is required during the inter-stimulus interval (ISI) in order to establish an association between CS and US. After either normal delay or trace conditioning with a range of stimulus durations and ISIs, a conditioned response (CR) is performed in anticipation of the US

Multiple brain areas are involved in eyeblink conditioning. Many of these regions, and their interactions, are simulated in the current neural model (Fig. 2). Sensory input comes into the cortex, and the model, by way of the thalamus. Since the US is an aversive stimulus, the amygdala is involved (Büchel, Dolan, Armony, & Friston, 1999; Lee & Kim, 2004). The hippocampus plays a role in new learning in general (Frankland & Bontempi, 2005; Kim, Clark, & Thompson, 1995; Takehara et al., 2003) and in adaptively timed learning in particular (Büchel et al., 1999; Green & Woodruff-Pak, 2000; Kaneko & Thompson, 1997; Port et al., 1986; Smith, 1968). The prefrontal cortex plays an essential role in the consolidation of long-term memory (Frankland & Bontempi, 2005; Takehara, Kawahara, & Kirino, 2003; Winocur, Moscovitch, & Bontempi, 2010). Lesions of the amygdala, hippocampus, thalamus, and neocortex have different effects depending on the phase of learning when they occur.

Fig. 2

The neurotrophic START, or nSTART, macrocircuit is formed from parallel and interconnected networks that support both delay and trace conditioning. Connectivity between thalamus and sensory cortex includes pathways from the amygdala and hippocampus, as does connectivity between sensory cortex and prefrontal cortex, specifically orbitofrontal cortex. These circuits are homologous. Hence the current model lumps the thalamus and sensory cortex together and simulates only sensory cortical dynamics. Multiple types of learning and neurotrophic mechanisms of memory consolidation cooperate in these circuits to generate adaptively timed responses. Connections from sensory cortex to orbitofrontal cortex support category learning. Reciprocal connections from orbitofrontal cortex to sensory cortex support attention. Habituative transmitter gates modulate excitatory conductances at all processing stages. Connections from sensory cortex to amygdala support conditioned reinforcer learning. Connections from amygdala to orbitofrontal cortex support incentive motivation learning. Hippocampal adaptive timing and brain-derived neurotrophic factor (BDNF) bridge temporal delays between conditioned stimulus (CS) offset and unconditioned stimulus (US) onset during trace conditioning acquisition. BDNF also supports long-term memory consolidation within sensory-cortex-to-hippocampus pathways and hippocampus-to-orbitofrontal pathways. The pontine nuclei serve as a final common pathway for reading out conditioned responses. Cerebellar dynamics are not simulated in nSTART. Key: arrowhead = excitatory synapse; hemidisc = adaptive weight; square = habituative transmitter gate; square followed by a hemidisc = habituative transmitter gate followed by an adaptive weight

In particular, the model clarifies why the hippocampus is needed for trace conditioning, but not delay conditioning (Büchel et al., 1999; Frankland & Bontempi, 2005; Green & Woodruff-Pak, 2000; Kaneko & Thompson, 1997; Kim, Clark, & Thompson, 1995; Port et al., 1986; Takehara, Kawahara, & Kirino, 2003); why thalamic lesions retard the acquisition of trace conditioning (Powell & Churchwell, 2002), but have less of a statistically significant effect on delay conditioning (Buchanan & Thompson, 1990); why early but not late amygdala lesions degrade both delay conditioning (Lee & Kim, 2004) and trace conditioning (Büchel et al., 1999); why hippocampal lesions degrade recent but not temporally remote trace conditioning (Kim et al., 1995; Takehara et al., 2003); why, in delay conditioning, such lesions typically have no negative impact on CR performance, although this finding may vary with experimental preparation and CR success criteria (Berger, 1984; Chen et al., 1995; Lee & Kim, 2004; Port, 1985; Shors, 1992; Moustafa et al., 2013); why cortical lesions degrade temporally remote but not recent trace conditioning, but have no impact on the acquisition of delay conditioning (Frankland & Bontempi, 2005; Kronforst-Collins & Disterhoft, 1998; McLaughlin et al., 2002; Takehara et al., 2003; see also Oakley & Steele Russell, 1972; Yeo, Hardiman, Moore, & Steele Russell, 1984); how temporally-graded amnesia may be caused by ablation of the medial prefrontal cortex after memory consolidation (Simon, Knuckley, Churchwell, & Powell, 2005; Takehara et al., 2003; Weible, McEchron, & Disterhoft, 2000); how attention and consciousness are linked during delay and trace conditioning (Clark, Manns, & Squire, 2002; Clark & Squire, 1998, 2010); and how neurotrophins, notably brain-derived neurotrophic factor (BDNF), influence memory formation and consolidation (Kokaia et al., 1993; Tyler et al., 2002).

The article does not attempt to explain all aspects of memory consolidation, although its proposed explanations may help to do so in future studies. One reason for this is that the prefrontal cortex and hippocampus, which figure prominently in model explanations, carry out multiple functions (see section ‘Clinical relevance of BDNF’). The model only attempts to explain how an interacting subset of these mechanisms contributes to conditioning and memory consolidation. Not considered, for example, are sequence-dependent learning, which depends on prefrontal working memories and list chunking dynamics (cf. compatible models for such processes in Grossberg & Kazerounian, 2016; Grossberg & Pearson, 2008; and Silver et al., 2011), or spatial navigation, which depends upon entorhinal grid cells and hippocampal place cells (cf. compatible models in Grossberg & Pilly, 2014; Pilly & Grossberg, 2012). In addition, the model does not attempt to simulate properties such as hippocampal replay, which require an analysis of sequence-dependent learning, including spatial navigation, for their consideration, or finer neurophysiological properties such as the role of sleep, sharp wave ripples, and spindles in memory consolidation (see Albouy, King, Maquet, & Doyon, 2013, for a review).

Data about brain activity during sleep provide further evidence about learning processes that support memory consolidation. These processes begin with awake experience and may continue during sleep, when there are no external stimuli to support learning (Kali & Dayan, 2004; Wilson, 2002). The activity generated during waking in the hippocampus is reproduced in sequence during rapid eye movement (REM) sleep on the same time scale as the original experiences, lasting tens of seconds to minutes (Louie & Wilson, 2001), or is compressed during slow-wave sleep (Nádasdy et al., 1999). During sleep, slow waves appear to be initiated in hippocampal CA3 (Siapas, Lubenov, & Wilson, 2005; Wilson & McNaughton, 1994), and hippocampal place cells tend to fire as though neuronal states were being played back in their previously experienced sequence as part of the memory consolidation process (Ji & Wilson, 2007; Qin, McNaughton, Skaggs, & Barnes, 1997; Skaggs & McNaughton, 1996; Steriade, 1999; Wilson & McNaughton, 1994). Relevant to the nSTART analysis are the facts that, during sleep, the interaction of hippocampal cells with cortex leads to neurotrophic expression (Hobson & Pace-Schott, 2002; Monteggia et al., 2004), and that similar sequential, self-organizing ensembles that are based on experience may also exist in various areas of the neocortex (Ji & Wilson, 2007; Maquet et al., 2000; cf. Deadwyler, West, & Robinson, 1981; Schoenbaum & Eichenbaum, 1995). With the nSTART analysis of neurotrophically modulated memory consolidation as a foundation, these sleep- and sequence-dependent processes, which require substantial additional model development, may be more easily understood.

Unifying three basic competences

The model reconciles three basic behavioral competences. Its explanatory power is illustrated by the fact that, although these competences are self-evident, the data properties summarized above are not. All three competences involve the brain’s ability to adaptively time its learning processes in a task-appropriate manner.

First, the brain needs to pay attention quickly to salient events, both positive and negative. However, such a rapid attention shift to focus on a salient event creates the risk of prematurely responding to that event, or of prematurely resetting and shifting the attentional focus to a different event before the response to that event could be fully executed. As explained below, this fast motivated attention pathway includes the amygdala. These potential problems of a fast motivated attention shift are alleviated by the second and third competences.

Second, the brain needs to be able to adaptively time and maintain motivated attention on a salient event until an appropriate response is executed. The ability to maintain motivated attention for an adaptively timed interval on the salient event involves the hippocampus, notably its dentate-CA3 region (Berger, Clark, & Thompson, 1980). Recent data have further developed this theme through the discovery of hippocampal “time cells” (Kraus et al., 2013; MacDonald et al., 2011).

Third, the brain needs to be able to adaptively time and execute an appropriate response to the salient event. The ability to execute an adaptively timed behavioral response always involves the cerebellum (Christian & Thompson, 2003; Fiala, Grossberg, & Bullock, 1996; Green & Woodruff-Pak, 2000; Ito, 1984). When the timing contingencies involve a relatively long trace conditioning ISI, or the onset of the US in delay conditioning is sufficiently delayed, then the hippocampus may also be required due to higher cognitive demand (Beylin, Gandhi, Wood, Talk, Matzel, & Shors, 2001).

How the brain may realize these three competences, along with data supporting these hypotheses, has been described in articles about the Spectrally Timed Adaptive Resonance Theory (START) model of Grossberg & Merrill (1992, 1996). A variation of the START model in which several of its mechanisms are out of balance, called the Imbalanced START, or iSTART, model, has been used to describe possible neural mechanisms of autism (Grossberg & Seidman, 2006). START mechanisms have also been used to offer mechanistic explanations of various symptoms of schizophrenia (Grossberg, 2000b). The current neurotrophic START, or nSTART, model builds upon this foundation. It refines the anatomical interactions that are described in START, clarifies how adaptively timed learning and memory consolidation depend upon neurotrophins acting within several of these anatomical interactions, and uses this expanded model to explain how various brain lesions to areas involved in eyeblink conditioning may cause abnormal learning and memory.

nSTART model of adaptively timed eyeblink conditioning

Neural pathways that support the conditioned eyeblink response involve various hierarchical and parallel circuits (Thompson, 1988; Woodruff-Pak & Steinmetz, 2000a, 2000b). The nSTART macrocircuit (Fig. 2) simulates key processes that exist within the wider network that supports the eyeblink response in vivo and highlights circuitry required for adaptively timed trace conditioning. Thalamus and sensory cortex are lumped into one sensory cortical representation for representational simplicity. However, the exposition of the model and its output pathways will require discussion of independent thalamocortical and corticocortical pathways. Different experimental manipulations affect brain regions like the thalamus, cortex, amygdala, and hippocampus in different ways. Our model computer simulations illustrate these differences. In addition, it is important to explain how these several individual responses of different brain regions contribute to a final common path whose activity covaries with observed conditioned responses. Outputs from these brain regions meet directly or indirectly at the pontine nucleus, the final common bridge to the cerebellum, which generates the CR (Freeman & Muckler, 2003; Kalmbach et al., 2009a, 2009b; Siegel et al., 2012; Woodruff-Pak & Disterhoft, 2007). Simulations of how the model pontine nucleus responds to the aggregate effect of all the other brain regions are thus also provided. The internal dynamics of the cerebellum are not, however, simulated in the nSTART model; but see Fiala, Grossberg, and Bullock (1996) for a detailed cerebellar learning model that simulates how Ca2+ can modulate mGluR dynamics to adaptively time responses across long ISIs.
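
As a rough illustration of this final-common-path idea, the following hedged Python sketch (not the published nSTART pontine equations; the input names, weights, and threshold are illustrative assumptions) treats the pontine stage as a rectified, thresholded sum of converging learned output signals:

```python
import numpy as np

def pontine_output(orbitofrontal, sensory_cortex, w=(1.0, 0.5), threshold=0.4):
    """Rectified, thresholded sum of converging output signals that is relayed
    toward the cerebellum, which generates the CR (not simulated here)."""
    drive = w[0] * orbitofrontal + w[1] * sensory_cortex
    return float(np.maximum(drive - threshold, 0.0))

# After consolidation, a well-timed orbitofrontal output dominates the read-out.
print(pontine_output(orbitofrontal=0.8, sensory_cortex=0.2))  # 0.5
```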

Normal and amnesic delay conditioning and trace conditioning

The ability to associatively learn which subset of earlier events predicts, or causes, later consequences, and which event combinations are not predictive, is a critical survival competence in normal adaptive behavior. In this section, data are highlighted that describe the differences between normal and abnormal acquisition and retention of associative learning, with reference to the specific roles of interactions among the processing areas in nSTART’s functional anatomy; notably, interactions of the sensory cortex with the thalamus, prefrontal cortex, amygdala, and hippocampus. See ‘Methods’ for an exposition of design principles and heuristic modeling concepts that go into the nSTART model; ‘Model description’ for a non-technical exposition of the model processes and their interactions; ‘Results’ for model simulations of data; ‘Discussion’ for a general summary; and ‘Mathematical Equations and Parameters’ for a complete summary of the model mechanisms.

Lesion data show that delay conditioning requires the cerebellum but does not need the hippocampus to acquire an adaptively timed conditioned response. Studies of hippocampal lesions in rats, rabbits, and humans reveal that, if a lesion occurs before delay conditioning (Daum, Schugens, Breitenstein, Topka, & Spieker, 1996; Ivkovich & Thompson, 1997; Schmaltz & Theios, 1972; Solomon & Moore, 1975; Weiskrantz & Warrington, 1979), or any time after delay conditioning (Akase, Alkon, & Disterhoft, 1989; Orr & Berger, 1985; Port et al., 1986), the subject can still acquire or retain a CR. Depending on the performance criteria, acquisition is sometimes even reported as facilitated (Berger, 1984; Chen et al., 1995; Lee & Kim, 2004; Port, 1985; Shors, 1992).

Lee and Kim (2004) presented electromyography (EMG) data showing that amygdala lesions in rats decelerated delay conditioning if made prior to training, but not if made post-training, while hippocampal lesions accelerated delay conditioning if made prior to training. They found a time-limited role of the amygdala similar to the time-limited role of the hippocampus: The amygdala is more active during early acquisition than later. In addition, they found that the amygdala without the hippocampus is not sufficient for trace conditioning. During functional magnetic resonance imaging (fMRI) studies of human trace conditioning, Büchel et al. (1999) also found decreases in amygdala responses over time. They cited other fMRI studies that found robust hippocampal activity in trace conditioning, but not delay conditioning, to underscore their hypothesis that, while the amygdala may contribute to trace conditioning, the hippocampus is required. Chau and Galvez (2012) discussed the likelihood of the same time-limited involvement of the amygdala in trace eyeblink conditioning.

Holland and Gallagher (1999) reviewed literature describing the role of the amygdala as either modulatory or required, depending on specific connections with other brain systems, for normal “functions often characterized as attention, reinforcement and representation” (p. 66). Aggleton and Saunders (2000) described the amygdala in terms of four functional systems (accessory olfactory, main olfactory, autonomic, and frontotemporal). In the macaque monkey, ten interconnected cytoarchitectonic areas were defined within the amygdala, with 15 types of cortical inputs, 17 types of cortical projections, 22 types of subcortical inputs, and 15 types of subcortical projections (their Figs. 1.2–1.7, pp. 4–9). Given this complexity, the data are mixed about whether the amygdala is required for acquisition, or retention after consolidation, depending on the cause (cytotoxin, acid or electronic burning, cutting), target area, and degree of lesion, as well as the strength of the US, learning paradigm, and specific task (Blair, Sotres-Bayon, Moita, & LeDoux, 2005; Cahill & McGaugh, 1990; Everitt, Cardinal, Hall, Parkinson, & Robbins, 2000; Kapp, Wilson, Pascoe, Supple, & Whalen, 1990; Killcross, Everitt, & Robbins, 1997; Lehmann, Treit, & Parent, 2000; Medina, Repa, Mauk, & LeDoux, 2002; Neufeld & Mintz, 2001; Oswald, Maddox, Tisdale, & Powell, 2010; Vazdarjanova & McGaugh, 1998). In fact, “…aversive eyeblink conditioning…survives lesions of either the central or basolateral parts of the amygdala” (Thompson et al., 1987). Additionally, such lesions have been found not to prevent Pavlovian appetitive conditioning or other types of appetitively-based learning (McGaugh, 2002, p. 456).

These inconsistencies among the data may exist due to the contributions from multiple pathways that support emotion. For example, within the MOTIVATOR model extension of the CogEM model (see below), hypothalamic and related internal homeostatic and drive circuits may function without the amygdala (Dranias et al., 2008). The nSTART model only incorporates an afferent cortical connection from the amygdala to represent incentive motivational learning signals. Within the cortex, however, the excitatory inputs from both the amygdala and hippocampus are modulated by the strength of thalamocortical signals.

A clear pattern emerges from comparing various data that disclose essential functions of the hippocampus, functions that are qualitatively simulated in nSTART. The hippocampus has been studied with regard to the acquisition of trace eyeblink conditioning, and the adaptive timing of conditioned responses (Berger, Laham, & Thompson, 1980; Mauk & Ruiz, 1992; Schmaltz & Theios, 1972; Sears & Steinmetz, 1990; Woodruff-Pak, 1993; Woodruff-Pak & Disterhoft, 2007). If a hippocampal lesion or other system disruption occurs before trace conditioning acquisition (Ivkovich & Thompson, 1997; Kaneko & Thompson, 1997; Weiss & Thompson, 1991b; Woodruff-Pak, 2001), or shortly thereafter (Kim et al., 1995; Moyer, Deyo, & Disterhoft, 1990; Takehara et al., 2003), the CR is not obtained or retained. Trace conditioning is impaired by pre-acquisition hippocampal lesions created during laboratory experimentation on animals (Anagnostaras, Maren, & Fanselow, 1999; Berry & Thompson, 1979; Garrud et al., 1984; James, Hardiman, & Yeo, 1987; Kim et al., 1995; Orr & Berger, 1985; Schmajuk, Lam, & Christiansen, 1994; Schmaltz & Theios, 1972; Solomon & Moore, 1975), and in humans with amnesia (Clark & Squire, 1998; Gabrieli et al., 1995; McGlinchey-Berroth, Carrillo, Gabrieli, Brawn, & Disterhoft, 1997), Alzheimer’s disease, or age-related deficits (Little, Lipsitt, & Rovee-Collier, 1984; Solomon et al., 1990; Weiss & Thompson, 1991a; Woodruff-Pak, 2001).

The data show that, during trace conditioning, there is successful post-acquisition performance of the CR only if the hippocampal lesion occurs after a critical period of hippocampal support of memory consolidation within the neocortex (Kim et al., 1995; Takashima et al., 2009; Takehara et al., 2003). Data from in vitro cell preparations also support the time-limited role of the hippocampus in new learning that is simulated in nSTART: activity in hippocampal CA1 and CA3 pyramidal neurons peaked 24 h after conditioning was completed and decayed back to baseline within 14 days (Thompson, Moyer, & Disterhoft, 1996). The effect of early versus late hippocampal lesions is challenging to explain since no overt training occurs after conditioning during the period before hippocampal ablation.

After consolidation due to hippocampal involvement is accomplished, thalamocortical signals in conjunction with the cerebellum determine the timed execution of the CR during performance (Gabriel, Sparenborg, & Stolar, 1987; Sosina, 1992). Indeed, “…there are two memory circuitries for trace conditioning. One involves the hippocampus and the cerebellum and mediates recently acquired memory; the other involves the mPFC and the cerebellum and mediates remotely acquired memory” (Takehara et al., 2003, p. 9904; see also Berger, Weikart, Bassett, & Orr, 1986; O'Reilly et al., 2010). nSTART qualitatively models these data as follows: after memory has consolidated, when the hippocampus is no longer needed, cortical connections to the pontine nuclei serve to elicit conditioned responses by way of the cerebellum (Siegel, Kalmbach, Chitwood, & Mauk, 2012; Woodruff-Pak & Disterhoft, 2007).

Based on the extent and timing of hippocampal damage, learning impairments range from needing more training trials than normal in order to learn successfully, through persistent response-timing difficulties, to the inability to learn and form new memories. The nSTART model explains the need for the hippocampus during trace conditioning in terms of how the hippocampus supports strengthening of partially conditioned thalamocortical and corticocortical connections during memory consolidation (see Fig. 2). The hippocampus has this ability because it includes circuits that can bridge the temporal gaps between CS and US during trace conditioning, unlike the amygdala, and can learn to adaptively time these temporal gaps in its responses, as originally simulated in the START model (Grossberg & Merrill, 1992, 1996; Grossberg & Schmajuk, 1989). The current nSTART model extends this analysis using mechanisms of endogenous hippocampal activation and BDNF modulation (see below) to explain the time-limited role of the hippocampus in terms of its support of the consolidation of new learning into long-term memories. This hypothesis is elaborated and contrasted with alternative models of memory consolidation below (‘Multiple hippocampal functions: Space, time, novelty, consolidation, and episodic learning’).
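
As a rough illustration of the spectral-timing idea behind this bridging ability, the following sketch (a simplified stand-in for, not a reproduction of, the START equations; the Gaussian unit responses, the spectrum of peak times, and the learning rate are assumptions) shows how associative weights over a population of differently timed units let the population output peak near the learned ISI:

```python
import numpy as np

dt = 1.0                            # ms
t = np.arange(0, 1000, dt)          # time within a trial; CS onset at t = 0
peaks = np.linspace(50, 900, 40)    # assumed spectrum of unit peak times
width = 80.0
units = np.exp(-((t[None, :] - peaks[:, None]) ** 2) / (2 * width ** 2))

us_time, rate = 500.0, 0.1          # ISI = 500 ms; learning rate (assumed)
weights = np.zeros(len(peaks))
for _ in range(20):                 # paired CS-US conditioning trials
    us_idx = int(us_time / dt)
    # Units active at the US time gain associative weight (bounded at 1):
    weights += rate * units[:, us_idx] * (1.0 - weights)

timed_output = weights @ units      # adaptively timed population response
print(t[np.argmax(timed_output)])   # peaks near 500 ms, i.e., near the US time
```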

Conditioning and consciousness

Several studies of humans have described a link between consciousness and conditioning. Early work interpreted conscious awareness as another class of conditioned responses (Grant, 1973; Hilgard, Campbell, & Sears, 1937; Kimble, 1962; McAllister & McAllister, 1958). More recently, it was found that, while amnesic patients with hippocampal damage acquired delay conditioning at a normal rate, they failed to acquire trace conditioning (Clark & Squire, 1998). These experimenters postulated that normal humans acquire trace conditioning because they have intact declarative or episodic memory and, therefore, can demonstrate conscious knowledge of a temporal relationship between CS and US: “trace conditioning requires the acquisition and retention of conscious knowledge” (p. 79). They did not, however, discuss mechanisms underlying this ability, save mentioning that the neocortex probably represents temporal relationships between stimuli and “would require the hippocampus and related structures to work conjointly with the neocortex” (p.79).

Other studies have also demonstrated a link between consciousness and conditioning (Gabrieli et al., 1995; McGlinchey-Berroth, Brawn, & Disterhoft, 1999; McGlinchey-Berroth et al., 1997) and described an essential role for awareness in declarative learning, but no necessary role in non-declarative or procedural learning, as illustrated by experimental findings related to trace and delay conditioning, respectively (Manns, Clark, & Squire, 2000; Papka, Ivry, & Woodruff-Pak, 1997). For example, trace conditioning is facilitated by conscious awareness in normal control subjects while delay conditioning is not, whereas amnesics with bilateral hippocampal lesions perform at a success rate similar to unaware controls for both delay and trace conditioning (Clark, Manns, & Squire, 2001). Amnesics were found to be unaware of experimental contingencies, and poor performers on trace conditioning (Clark & Squire, 1998). Thus, the link between adaptive timing, attention, awareness, and consciousness has been experimentally established within the trace conditioning paradigm. The nSTART model traces the link between consciousness and conditioning to the role of hippocampus in supporting a sustained cognitive-emotional resonance that underlies motivated attention, consolidation of long-term memory, core consciousness, and "the feeling of what happens" (Damasio, 1999).

Brain-derived neurotrophic factor (BDNF) in memory formation and consolidation

Memory consolidation, a process that supports an enduring memory of new learning, has been extensively studied (McGaugh, 2000, 2002; Mehta, 2007; Nadel & Bohbot, 2001; Takehara, Kawahara, & Kirino, 2003; Squire & Alvarez, 1995; Takashima et al., 2009; Thompson, Moyer, & Disterhoft, 1996; Tyler et al., 2002). These data show time-limited involvement of the limbic system, and long-term involvement of the neocortex. The question of what process actively strengthens memory during this period, even when there is no explicit practice, has been linked to the action of neurotrophins (Zang et al., 2007), a class of proteins with important effects on learning and memory, and especially to BDNF (Heldt, Stanek, Chhatwal, & Ressler, 2007; Hu & Russek, 2008; Monteggia et al., 2004; Purves, 1988; Rattiner, Davis, & Ressler, 2005; Schuman, 1999; Thoenen, 1995; Tyler, Alonso, Bramham, & Pozzo-Miller, 2002). Postsynaptically, neurotrophins enhance responsiveness of target synapses (Kang & Schuman, 1995; Kohara, Kitamura, Morishima, & Tsumoto, 2001) and allow for quicker processing (Knipper et al., 1993; Lessman, 1998). Presynaptically, they act as retrograde messengers (Davis & Murphy, 1994; Ganguly, Koss, & Poo, 2000), traveling from a target cell population back to excitatory source cells and increasing the flow of transmitter from the source cell population to generate a positive feedback loop between the source and the target cells (Schinder, Berninger, & Poo, 2000), as also occurs in some neural models of learning and memory search (e.g., Carpenter & Grossberg, 1990). BDNF has also been interpreted as an essential component of long-term potentiation (LTP) in normal cell processing (Chen, Kolbeck, Barde, Bonhoeffer, & Kossel, 1999; Korte et al., 1995; Phillips et al., 1990). The functional involvement of existing BDNF receptors is critical in early LTP (up to 1 h) during the acquisition phase of learning the CR, whereas continued activation of the slowly decaying late-phase LTP signal (3+ h) requires new protein synthesis and gene expression. Rossato et al. (2009) have shown that hippocampal dopamine and the ventral tegmental area provide a temporally sensitive trigger for the expression of BDNF that is essential for long-term consolidation of memory related to reinforcement learning.

The BDNF response to a particular stimulus event may vary from microseconds (initial acquisition) to several days or weeks (long-term memory consolidation); thus, neurotrophins have a role whether the phase of learning is one of initial synaptic enhancement or long-term memory consolidation (Kang, Welcher, Shelton, & Schuman, 1997; Schuman, 1999; Singer, 1999). Furthermore, BDNF blockade shows that BDNF is essential for memory development at different phases of memory formation (Kang et al., 1997), and at all ages of an individual (Cabelli, Hohn, & Shatz, 1995; Tokuka, Saito, Yorifugi, Kishimoto, & Hisanaga, 2000). As nSTART qualitatively simulates, neurotrophins are thus required both for the initial acquisition of a memory and for its ongoing maintenance as memory consolidates.

BDNF is heavily expressed in the hippocampus as well as in the neocortex, where neurotrophins figure largely in activity-dependent development and plasticity, not only to build new synaptic bridges as needed, but also to inhibit and dismantle old ones. A process of competition among axons during the development of nerve connections (Bonhoeffer, 1996; Tucker, Meyer, & Barde, 2001; van Ooyen & Willshaw, 1999; see review in Tyler et al., 2002) exists in both young and mature animals (Phillips, Hains, Laramee, Rosenthal, & Winslow, 1990). BDNF also maintains cortical circuitry for long-term memory that may be shaped by various BDNF-independent factors during and after consolidation (Gorski, Zeiler, Tamowski, & Jones, 2003).

The nSTART model hypothesizes how BDNF may amplify and temporally extend activity-based signals within the hippocampus and the neocortex that facilitate endogenous strengthening of memory without further explicit learning. In particular, memory consolidation may be mechanistically achieved by means of a sustained cascade of BDNF expression beginning in the hippocampus and spreading to the cortex (Buzsáki & Chrobak, 2005; Cousens & Otto, 1998; Hobson & Pace-Schott, 2002; Monteggia, et al., 2004; Nádasdy, Hirase, Czurkó, Csicsvari, & Buzsáki, 1999; Smythe, Colom, & Bland, 1992; Staubli & Lynch, 1987; Vertes, Hoover, & Di Prisco, 2004), which is modeled in nSTART by the maintained activity level of hippocampal and cortical BDNF after conditioning trials end (see Fig. 2).
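
The following sketch illustrates this hypothesized mechanism in its simplest possible form (a hedged caricature of the idea, not the nSTART equations; all rate constants are illustrative assumptions): after trials end, a slowly decaying BDNF-like trace keeps strengthening a partially learned cortico-cortical weight without further training.

```python
import numpy as np

dt = 0.1                 # hours
steps = int(200 / dt)    # ~200 h after the last conditioning trial
bdnf = 1.0               # BDNF-like trace left by the conditioning trials
weight = 0.3             # partially learned cortico-cortical adaptive weight
bdnf_decay = 0.02        # slow decay rate of the trace (per hour)
rate = 0.05              # BDNF-gated consolidation rate (per hour)

for _ in range(steps):
    bdnf += dt * (-bdnf_decay * bdnf)            # the trace slowly fades
    weight += dt * rate * bdnf * (1.0 - weight)  # weight growth gated by BDNF

print(round(bdnf, 3), round(weight, 3))  # weight nears its asymptote while support lasts
```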

Hippocampal bursting activity is not the only bursting activity that drives consolidation. Long-term activity-dependent consolidation of new learning is also supported by the synchronization of thalamocortical interactions in response to thalamic or cortical inputs (Llinas, Ribary, Joliot, & Wang, 1994; Steriade, 1999). Thalamic bursting neurons may lead to synaptic modifications in cortex, and cortex can in turn influence thalamic oscillations (Sherman & Guillery, 2003; Steriade, 1999). Thalamocortical resonance has been described as a basis for temporal binding and consciousness in increasingly specific models over the years. These models simulate how specific and nonspecific thalamic nuclei interact with the reticular nucleus and multiple stages of laminar cortical circuitry (Buzsáki, Llinás, Singer, Berthoz, & Christen, 1994; Engel, Fries, & Singer, 2001; Grossberg, 1980, 2003, 2007; Grossberg & Versace, 2008; Pollen, 1999; Yazdanbakhsh & Grossberg, 2004). nSTART qualitatively explains consolidation without including bursting phenomena, although oscillatory dynamics of this kind arise naturally in finer spiking versions of rate-based models such as nSTART (Grossberg & Versace, 2008; Palma, Grossberg, & Versace, 2012a, 2012b).

The nSTART model focuses on amygdala and hippocampal interactions with thalamus and neocortex during conditioning (Fig. 2). The model proposes that the hippocampus supports thalamo-cortical and cortico-cortical category learning that becomes well established during memory consolidation through its endogenous (bursting) activity (Siapas, Lubenov, & Wilson, 2005; Sosina, 1992), which is supported by neurotrophin mediators (Destexhe, Contreras, & Steriade, 1998). nSTART proposes that thalamo-cortical sustained activity is maintained through the combination of two mechanisms: the level of cortical BDNF activity, and the strength of the learned thalamo-cortical adaptive weights, or long-term memory (LTM) traces, that were strengthened by the memory consolidation process. This proposal is consistent with trace conditioning data showing that, after consolidation, when the hippocampus is no longer required for performance of CRs, the medial prefrontal cortex takes on a critical role for performance of the CR in reaction to the associated thalamic sensory input. Here, the etiology of retrograde amnesia is understood as a failure to retain memory, rather than as a failure of adaptive timing (Takehara et al., 2003).

Methods

From CogEM to nSTART

The nSTART model synthesizes and extends key principles, mechanisms, and properties of three previously published brain models of conditioning and behavior. These three models describe aspects of:

  1. How the brain learns to categorize objects and events in the world (Carpenter & Grossberg, 1987, 1991, 1993; Grossberg, 1976a, 1976b, 1980, 1982, 1984, 1987, 1999, 2013; Raizada & Grossberg, 2003); this is described within Adaptive Resonance Theory, or ART;

  2. How the brain learns the emotional meanings of such events through cognitive-emotional interactions, notably rewarding and punishing experiences, and how the brain determines which events are motivationally predictive, as during attentional blocking and unblocking (Dranias, Grossberg, & Bullock, 2008; Grossberg, 1971, 1972a, 1972b, 1980, 1982, 1984, 2000b; Grossberg, Bullock, & Dranias, 2008; Grossberg & Gutowski, 1987; Grossberg & Levine, 1987; Grossberg & Schmajuk, 1987); this is described within the Cognitive-Emotional-Motor, or CogEM, model; and

  3. How the brain learns to adaptively time the attention that is paid to motivationally important events, and when to respond to these events, in a context-appropriate manner (Fiala, Grossberg, & Bullock, 1996; Grossberg & Merrill, 1992, 1996; Grossberg & Paine, 2000; Grossberg & Schmajuk, 1989); this is described within the START model.

All three component models have been mathematically and computationally characterized elsewhere in order to explain behavioral and brain data about normal and abnormal behaviors. The principles and mechanisms that these models employ have thus been independently validated through their ability to explain a wide range of data. nSTART builds on this foundation to explain data about conditioning and memory consolidation, as it is affected by early and late amygdala, hippocampal, and cortical lesions, as well as BDNF expression in the hippocampus and cortex. The exposition in this section heuristically states the main modeling concepts and mechanisms before building upon them to mathematically realize the current model advances and synthesis.

The simulated data properties emerge from interactions of several brain regions for which processes evolve on multiple time scales, interacting in multiple nonlinear feedback loops. In order to simulate these data, the model incorporates only those network interactions that are rate-limiting in generating the targeted data. More detailed models of the relevant brain regions, that are consistent with the model interactions simulated herein, are described below, and provide a guide to future studies aimed at incorporating a broader range of functional competences.

Adaptive resonance theory

The first model upon which nSTART builds is called Adaptive Resonance Theory, or ART. ART is reviewed because a key process in nSTART is a form of category learning, and also because nSTART simulates a cognitive-emotional resonance that is essential for explaining its targeted data. ART proposes how the brain can rapidly learn to attend, recognize, and predict new objects and events without catastrophically forgetting memories of previously learned objects and events. This is accomplished through an attentive matching process between the feature patterns that are created by stimulus-driven bottom-up adaptive filters, and learned top-down expectations (Fig. 3). The top-down expectations, acting by themselves, can also prime the brain to anticipate future bottom-up feature patterns with which they will be matched.

Fig. 3

How ART searches for and learns a new recognition category using cycles of match-induced resonance and mismatch-induced reset. Active cells are shaded gray; inhibited cells are not shaded. (a) Input pattern I is instated across feature detectors at level F1 as an activity pattern X, at the same time that it generates excitatory signals to the orienting system A with a gain ρ that is called the vigilance parameter. Activity pattern X generates inhibitory signals to the orienting system A as it generates a bottom-up input pattern S to the category level F2. A dynamic balance within A between excitatory inputs from I and inhibitory inputs from S keeps A quiet. The bottom-up signals in S are multiplied by learned adaptive weights to form the input pattern T to F2. The inputs T are contrast-enhanced and normalized within F2 by recurrent lateral inhibitory signals that obey the membrane equations of neurophysiology, otherwise called shunting interactions. This competition leads to selection and activation of a small number of cells within F2 that receive the largest inputs. In this figure, a winner-take-all category is chosen, represented by a single cell (population). The chosen cells represent the category Y that codes for the feature pattern at F1. (b) The category activity Y generates top-down signals U that are multiplied by adaptive weights to form a prototype, or critical feature pattern, V that encodes the expectation that the active F2 category has learned for what feature pattern to expect at F1. This top-down expectation input V is added at F1 cells. If V mismatches I at F1, then a new STM activity pattern X* (the gray pattern) is selected at cells where the patterns match well enough. In other words, X* is active at I features that are confirmed by V. Mismatched features (white area) are inhibited. When X changes to X*, total inhibition decreases from F1 to A. (c) If inhibition decreases sufficiently, A releases a nonspecific arousal burst to F2; that is, “novel events are arousing”. Within the orienting system A, a vigilance parameter ρ determines how bad a match will be tolerated before a burst of nonspecific arousal is triggered. This arousal burst triggers a memory search for a better-matching category, as follows: Arousal resets F2 by inhibiting Y. (d) After Y is inhibited, X is reinstated and Y stays inhibited as X activates a different category, represented by a different winner-take-all category Y*, at F2. Search continues until a better matching, or novel, category is selected. When search ends, an attentive resonance triggers learning of the attended data in adaptive weights within both the bottom-up and top-down pathways. As learning stabilizes, inputs I can activate their globally best-matching categories directly through the adaptive filter, without activating the orienting system [Adapted with permission from Carpenter and Grossberg (1987)]

In nSTART, it is assumed that each CS and US is familiar and has already undergone category learning before the current simulations begin. The CS and US inputs to sensory cortex in the nSTART macrocircuit are assumed to be processed as learned object categories (Fig. 2). nSTART models a second stage of category learning from an object category in sensory cortex to an object-value category in orbitofrontal cortex. In general, each object category can become associated with more than one object-value category, so the same sensory cue can learn to generate different conditioned responses in response to learning with different reinforcers. It does this by learning to generate different responses when different value categories are active. These adaptive connections are thus, in general, one-to-many. Conceptually, the two stages of learning, at the object category stage and the object-value category stage, can be interpreted as a coordinated category learning process through which the orbitofrontal cortex categorizes objects and their motivational significance (Barbas, 1995, 2007; Rolls, 1998, 2000). The current model simulates such conditioning with only a single type of reinforcer. Strengthening the connection from object category to object-value category represents a simplified form of this category learning process in the current model simulations. One-to-many learning from an object category to multiple object-value categories is simulated in Chang, Grossberg, and Cao (2014).

As in other ART models, a top-down expectation pathway also exists from the orbitofrontal cortex to the sensory cortex. It provides top-down attentive modulation of sensory cortical activity, and is part of the cortico-cortico-amygdalar-hippocampal resonance that develops in the model during learning. This cognitive-emotional resonance, which plays a key role in the current model and its simulations, as well as its precursors in the START and iSTART models, is the main reason that nSTART is considered to be part of the family of ART models. Indeed, Grossberg (2016) summarizes an emerging classification of brain resonances that support conscious seeing, hearing, feeling, and knowing that includes this cognitive-emotional resonance.

nSTART explains how this cognitive-emotional resonance is sustained through time by adaptively-timed hippocampal feedback signals (Fig. 2). This hippocampal feedback plays a critical role in the model’s explanation of data about memory consolidation, and its ability to explain how the brain bridges the temporal gap between stimuli that occur in experimental paradigms like trace conditioning. Consolidation is complete within nSTART when the hippocampus is no longer needed to further strengthen the category memory that is activated by the CS. Finally, the role of the hippocampus in sustaining the cognitive-emotional resonances helps to explain the experimentally reported link between conditioning and consciousness (Clark & Squire, 1998).

In a complete ART model, when a sufficiently good match occurs between a bottom-up input pattern and an active top-down expectation, the system locks into a resonant state that focuses attention on the matched features and drives learning to incorporate them into the learned category; hence the term adaptive resonance. ART also predicts that all conscious states are resonant states, and the Grossberg (2016) classification of resonances contributes to clarifying their diverse functions throughout the brain. Such an adaptive resonance is one of the key mechanisms whereby ART ensures that memories are dynamically buffered against catastrophic forgetting. As noted above, a simplified form of this attentive matching process is included in nSTART in order to explain the cognitive-emotional resonances that support memory consolidation and the link between conditioning and consciousness.

In addition to the attentive resonant state itself, a hypothesis testing, or memory search, process in response to unexpected events helps to discover predictive recognition categories with which to learn about novel environments, and to switch attention to new inputs within a known environment. This hypothesis testing process is not simulated herein because the object categories that are activated in response to the CS and US stimuli are assumed to already have been learned, and unexpected events are minimized in the kinds of highly controlled delay and trace conditioning experiments that are the focus of the current study.

For the same reason, another mechanism that is important during hypothesis testing is not included in nSTART. The degree of match between bottom-up and top-down signal patterns that is required for resonance, sustained attention, and learning to occur is set by a vigilance parameter (Carpenter & Grossberg, 1987) (see ρ in Fig. 3a). Vigilance may be increased by predictive errors, and controls whether a particular learned category will represent concrete information, such as a particular view of a particular face, or abstract information, such as the fact that everyone has a face. Low vigilance allows the learning of general and abstract recognition categories, whereas high vigilance forces the learning of specific and concrete categories. The current simulations do not need to vary the degree of abstractness of the categories to be learned, so vigilance control has been omitted for simplicity.

A big enough mismatch indicates that the selected category does not represent the input data well enough, and drives a memory search, or hypothesis testing, for a category that can better represent the input data. In a more complete nSTART model, hypothesis testing would enable the learning and stable memory of large numbers of thalamo-cortical and cortico-cortical recognition categories. Such a hypothesis testing process includes a novelty-sensitive orienting system A, which is predicted to include both the nonspecific thalamus and the hippocampus (Fig. 3c; Carpenter & Grossberg, 1987, 1993; Grossberg, 2013; Grossberg & Versace, 2008). In nSTART, the model hippocampus does include the crucial process of adaptively timed learning that can bridge temporal gaps of hundreds of milliseconds to support trace conditioning and memory consolidation. In a more general nSTART model that is capable of self-stabilizing its learned memories, the hippocampus would also be involved in the memory search process.

In an ART model that includes memory search, when a mismatch occurs, the orienting system is activated and generates nonspecific arousal signals to the attentional system that rapidly reset the active recognition categories that have been reading out the poorly matching top-down expectations (Fig. 3c). The cause of the mismatch is hereby removed, thereby freeing the bottom-up filter to activate a different recognition category (Fig. 3d). This cycle of mismatch, arousal, and reset can repeat, thereby initiating a memory search, or hypothesis testing cycle, for a better-matching category. If no adequate match with a recognition category exists, say because the bottom-up input represents an unfamiliar experience, then the search process automatically activates an as yet uncommitted population of cells, with which to learn a new recognition category to represent the novel information.
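
For readers who prefer code to prose, the following toy sketch in the style of the binary ART 1 algorithm (a textbook simplification used for illustration, not the nSTART circuitry; the choice parameter and prototypes are assumptions, while the learning-by-intersection rule follows the classic ART 1 formulation) shows how the vigilance parameter ρ gates resonance versus reset-and-search:

```python
import numpy as np

def art_search(I, prototypes, rho=0.8, alpha=0.5):
    """Choose (or create) a category for binary input I; learn by intersection."""
    reset = set()
    while True:
        # Bottom-up choice: prefer categories whose prototypes best overlap I.
        scores = [np.sum(I * p) / (alpha + np.sum(p)) if j not in reset else -1.0
                  for j, p in enumerate(prototypes)]
        j = int(np.argmax(scores))
        if scores[j] < 0.0:                  # every existing category was reset:
            prototypes.append(I.copy())      # recruit an uncommitted node
            return len(prototypes) - 1
        # Top-down matching: fraction of input features confirmed by the prototype.
        match = np.sum(I * prototypes[j]) / np.sum(I)
        if match >= rho:                     # resonance: attend, learn, and stop
            prototypes[j] = I * prototypes[j]
            return j
        reset.add(j)                         # mismatch: arousal burst resets Y

prototypes = [np.array([1, 1, 0, 0]), np.array([0, 0, 1, 1])]
print(art_search(np.array([1, 1, 1, 0]), prototypes, rho=0.6))   # resonates with 0
print(art_search(np.array([0, 1, 1, 0]), prototypes, rho=0.9))   # recruits category 2
```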

All the learning and search processes that ART predicted have received support from behavioral, ERP, anatomical, neurophysiological, and/or neuropharmacological data, which are reviewed in the ART articles listed above; see, in particular, Grossberg (2013). Indeed, the role of the hippocampus in novelty detection has been known for many years (Deadwyler, West, & Lynch, 1979; Deadwyler et al., 1981; Vinogradova, 1975). In particular, the hippocampal CA1 and CA3 regions have been shown to be involved in a process of comparison between a prior conditioned stimulus and a current stimulus by rats in a non-spatial auditory task, the continuous non-matching-to-sample task (Sakurai, 1990). During performance of the task, single unit activity was recorded from several areas: CA1 and CA3, dentate gyrus (DG), entorhinal cortex, subicular complex, motor cortex (MC), prefrontal cortex, and dorsomedial thalamus. Go and No-Go responses indicated, respectively, whether the current tone was perceived as the same as (match) or different from (non-match) the preceding tone. Since about half of the units from the MC, CA1, CA3, and DG had increments of activity immediately prior to a Go response, these regions were implicated in motor or decisional aspects of making a match response. On non-match trials, units were also found in CA1 and CA3 with activity correlated to a correct No-Go response. Corroborating the function of the hippocampus in recognition memory, but not in storing the memories themselves, Otto and Eichenbaum (1992) reported that CA1 cells compare cortical representations of current perceptual processes to previous representations stored in parahippocampal and neocortical structures to detect mismatch in an odor-guided task. They noted that “the hippocampus maintains neither active nor passive memory representations” (p. 332).

Grossberg and Versace (2008) have proposed how the nonspecific thalamus can also be activated by novel events and trigger hypothesis testing. In their Synchronous Matching ART (SMART) model, a predictive error can lead to a mismatch within the nucleus basalis of Meynert, which releases acetylcholine broadly in the neocortex, leading to an increase in vigilance and a memory search for a better matching category. Palma, Grossberg, and Versace (2012a) and Palma, Versace, and Grossberg (2012b) further model how acetylcholine-modulated processes work, and explain a wide range of data using their modeling synthesis.

CogEM and MOTIVATOR models

Recognition categories can be activated when objects are experienced, but do not reflect the emotional or motivational value of these objects. Such a recognition category can, however, be associated through reinforcement learning with one or more drive representations, which are brain sites that represent internal drive states and emotions. Activation of a drive representation by a recognition category can trigger emotional reactions and incentive motivational feedback to recognition categories, thereby amplifying valued recognition categories with motivated attention as part of a cognitive-emotional resonance between the inferotemporal cortex, amygdala, and orbitofrontal cortex. When a recognition category is chosen in this way, it can trigger choice and release of actions that realize valued goals in a context-sensitive way.

Such internal drive states and motivational decisions are incorporated into nSTART using mechanisms from the second model, called the Cognitive-Emotional-Motor, or CogEM, model. CogEM simulates the learning of cognitive-emotional associations, notably associations that link external objects and events in the world to internal feelings and emotions that give these objects and events value (Fig. 4a and b). These emotions also activate the motivational pathways that energize actions aimed at acquiring or manipulating objects or events to satisfy them.

The CogEM model clarifies interactions between two types of homologous circuits: one circuit includes interactions between the thalamus, sensory cortex, and amygdala; the other circuit includes interactions between the sensory cortex, orbitofrontal cortex, and amygdala. The nSTART model (Fig. 2) simulates cortico-cortico-amygdalar interactions. At the present level of simplification, the same activation and learning dynamics could also simulate interactions between thalamus, sensory cortices, and the amygdala. In particular, the CogEM model proposes how emotional centers of the brain, such as the amygdala, interact with sensory and prefrontal cortices – notably the orbitofrontal cortex – to generate affective states, attend to motivationally salient sensory events, and elicit motivated behaviors. Neurophysiological data provide increasing support for the predicted role of interactions between the amygdala and orbitofrontal cortex in focusing motivated attention on cell populations that can select learned responses which have previously succeeded in acquiring valued goal objects (Baxter et al., 2000; Rolls, 1998, 2000; Schoenbaum, Setlow, Saddoris, & Gallagher, 2003).

In ART, resonant states can develop within sensory and cognitive feedback loops. Resonance can also occur within CogEM circuits between sensory and cognitive representations of the external world and emotional representations of what is valued by the individual. Activating the (sensory cortex)-(amygdala)-(prefrontal cortex) feedback loop between cognitive and emotional centers is predicted to generate a cognitive-emotional resonance that can support conscious awareness of events happening in the world and how we feel about them. This resonance tends to focus attention selectively upon objects and events that promise to satisfy emotional needs. Such a resonance, when it is temporally extended to also include the hippocampus, as described below, helps to explain how trace conditioning occurs, as well as the link between conditioning and consciousness that has been experimentally reported.

Figure 4a and b summarize the CogEM hypothesis that (at least) three types of internal representation interact during classical conditioning and other reinforcement learning paradigms: sensory cortical representations S, drive representations D, and motor representations M. These representations, and the learning that they support, are incorporated into the nSTART circuit (Fig. 2).

Fig. 4.

Fig. 4

(a) The simplest Cognitive-Emotional-Motor (CogEM) model: Three types of interacting representations (sensory, S; drive, D; and motor, M) that control three types of learning (conditioned reinforcer, incentive motivational, and motor) help to explain many reinforcement learning data. (b) In order to work well, a sensory representation S must have (at least) two successive stages, S(1) and S(2), so that sensory events cannot release actions that are motivationally inappropriate. The two successive stages of a sensory representation S are interpreted to be in the appropriate sensory cortex (corresponds to S(1)) and the prefrontal cortex, notably the orbitofrontal cortex (corresponds to S(2)). The prefrontal stage requires motivational support from a drive representation D, such as the amygdala, in the form of feedback from the incentive motivational learning pathway, to be fully effective. Amygdala inputs to prefrontal cortex cause feedback from prefrontal cortex to sensory cortex that selectively amplifies and focuses attention upon motivationally relevant sensory events, and thereby “attentionally blocks” irrelevant cues. [Reprinted with permission from Grossberg and Seidman (2006).] (c) The amygdala and basal ganglia work together, embodying complementary functions, to provide motivational support, focus attention, and release contextually appropriate actions to achieve valued goals. For example, the basal ganglia substantia nigra pars compacta (SNc) releases Now Print learning signals in response to unexpected rewards or punishments, whereas the amygdala generates incentive motivational signals that support the attainment of expected valued goal objects. The MOTIVATOR model circuit diagram shows cognitive-emotional interactions between higher-order sensory cortices and an evaluative neuraxis composed of the hypothalamus, amygdala, basal ganglia, and orbitofrontal cortex [Reprinted with permission from Dranias et al. (2008)]

Sensory representations S temporarily store internal representations of sensory events, such as conditioned stimuli (CSs) and unconditioned stimuli (USs), in short-term and working memory. Drive representations D are sites where reinforcing and homeostatic, or drive, cues converge to activate emotional responses. Motor representations M control the read-out of actions. In particular, the S representations are thalamo-cortical or cortico-cortical representations of external events, including the object recognition categories that are learned by inferotemporal and prefrontal cortical interactions (Desimone, 1991, 1998; Gochin, Miller, Gross, & Gerstein, 1991; Harries & Perrett, 1991; Mishkin, Ungerleider, & Macko, 1983; Ungerleider & Mishkin, 1982), and that are modeled by ART. These sensory representations are stored via recurrent on-center off-surround networks that tend to conserve their total activity while they contrast-normalize, contrast-enhance, and store their input patterns in short-term memory (Fig. 4a and b).

The D representations include hypothalamic and amygdala circuits (Figs. 2 and 5) at which reinforcing and homeostatic, or drive, cues converge to generate emotional reactions and motivational decisions (Aggleton, 1993; Bower, 1981; Davis, 1994; Gloor et al., 1982; Halgren, Walter, Cherlow, & Crandall, 1978; LeDoux, 1993). The M representations include cortical and cerebellar circuits that control discrete adaptive responses (Evarts, 1973; Ito, 1984; Kalaska, Cohen, Hyde, & Prud’homme, 1989; Thompson, 1988). More complete models of the internal structure of these several types of representations have been presented elsewhere (e.g., Brown, Bullock, & Grossberg, 2004; Bullock, Cisek, & Grossberg, 1998; Carpenter & Grossberg, 1991; Contreras-Vidal, Grossberg, & Bullock, 1997; Dranias, Grossberg, & Bullock, 2008; Fiala, Grossberg, & Bullock, 1996; Gnadt & Grossberg, 2008; Grossberg, 1987; Grossberg, Bullock & Dranias, 2008; Grossberg & Merrill, 1996; Grossberg & Schmajuk, 1987; Raizada & Grossberg, 2003), and can be incorporated into future elaborations of nSTART without undermining any of the current model's conclusions.

Fig. 5.

Fig. 5

Orbital prefrontal cortex receives projections from the sensory cortices (visual, somatosensory, auditory, gustatory, and olfactory) and from the amygdala, which also receives inputs from the same sensory cortices. These anatomical stages correspond to the model CogEM stages in Fig. 4 [Reprinted with permission from Barbas (1995)]

nSTART does not incorporate the basal ganglia to simulate its targeted data, even though the basal ganglia and amygdala work together to provide motivational support, focus attention, and release contextually appropriate actions to achieve valued goals (Flores & Disterhoft, 2009). The MOTIVATOR model (Dranias et al., 2008; Grossberg et al., 2008) begins to explain how this interaction happens (Fig. 4c), notably how the amygdala and basal ganglia may play complementary roles during cognitive-emotional learning and motivated goal-oriented behaviors (Grossberg, 2000a). MOTIVATOR describes cognitive-emotional interactions between higher-order sensory cortices and an evaluative neuraxis composed of the hypothalamus, amygdala, basal ganglia, and orbitofrontal cortex. Given a conditioned stimulus (CS), the model amygdala and lateral hypothalamus interact to calculate the expected current value of the subjective outcome that the CS predicts, constrained by the current state of deprivation or satiation. As in the CogEM model, the amygdala relays the expected value information to orbitofrontal cells that receive inputs from anterior inferotemporal cells, and medial orbitofrontal cells that receive inputs from rhinal cortex. The activations of these orbitofrontal cells code the subjective values of objects. These values guide behavioral choices.

The model basal ganglia detect errors in CS-specific predictions of the value and timing of rewards. Excitatory inputs from the pedunculopontine nucleus interact with timed inhibitory inputs from model striosomes in the ventral striatum to regulate dopamine burst and dip responses from cells in the substantia nigra pars compacta and ventral tegmental area. Learning in cortical and striatal regions is strongly modulated by dopamine. The MOTIVATOR model is used to address tasks that examine food-specific satiety, Pavlovian conditioning, reinforcer devaluation, and simultaneous visual discrimination. Model simulations successfully reproduce discharge dynamics of known cell types, including signals that predict saccadic reaction times and CS-dependent changes in systolic blood pressure. In the nSTART model, these basal ganglia interactions are not needed to simulate the targeted data, hence will not be further discussed.

Even without basal ganglia dynamics, the CogEM model has successfully learned to control motivated behaviors in mobile robots (e.g., Baloch & Waxman, 1991; Chang & Gaudiano, 1998; Gaudiano & Chang, 1997; Gaudiano, Zalama, Chang, & Lopez-Coronado, 1996).

Three types of learning take place among the CogEM sensory, drive, and motor representations (Fig. 4a). Conditioned reinforcer learning enables sensory events to activate emotional reactions at drive representations. Incentive motivational learning enables emotions to generate a motivational set that biases the system to process cognitive information consistent with that emotion. Motor learning allows sensory and cognitive representations to generate actions. nSTART simulates both conditioned reinforcer learning, from thalamus or sensory cortex to amygdala, and incentive motivational learning, from amygdala to sensory cortex or to orbitofrontal cortex (Fig. 2). Instead of explicitly modeling motor learning circuits in the cerebellum, nSTART uses cortical and amygdala inputs to the pontine nuclei as indicators of the timing and strength of conditioned motor outputs (CRs) (Freeman & Muckler, 2003; Kalmbach et al., 2009; Siegel et al., 2012; Woodruff-Pak & Disterhoft, 2007).

During classical conditioning, a CS activates its sensory representation S before the drive representation D is activated by an unconditioned stimulus (US), or by other previously conditioned reinforcer CSs. If it is appropriately timed, such pairing causes learning at the adaptive weights within the S → D pathway. The ability of the CS to subsequently activate D via this learned pathway is one of its key properties as a conditioned reinforcer. As these S → D associations are being formed, incentive motivational learning within the D → S incentive motivational pathway also occurs, due to the same pairing of CS and US. Incentive motivational learning enables an activated drive representation D to prime, or modulate, the sensory representations S of all cues, including the CSs, that have consistently been correlated with it. That is how activating D generates a “motivational set”: it primes all of the sensory and cognitive representations that have been associated with that drive in the past. These incentive motivational signals are a type of motivationally-biased attention. The S → M motor, or habit, learning enables the sensorimotor maps, vectors, and gains that are involved in sensory-motor control to be adaptively calibrated, thereby enabling a CS to read-out correctly calibrated movements as a CR.

Taken together, these processes control aspects of the learning and recognition of sensory and cognitive memories, which are often classified as part of the declarative memory system (Mishkin, 1982, 1993; Squire & Cohen, 1984); and the performance of learned motor skills, which are often classified as part of the procedural memory system (Gilbert & Thach, 1977; Ito, 1984; Thompson, 1988).

Once both conditioned reinforcer and incentive motivational learning have taken place, a CS can activate a (sensory cortex)-(amygdala)-(orbitofrontal cortex)-(sensory cortex) feedback circuit (Figs. 2 and 4c). This circuit supports a cognitive-emotional resonance that leads to core consciousness and “the feeling of what happens” (Damasio, 1999), while it enables the brain to rapidly focus motivated attention on motivationally salient objects and events. This is the first behavioral competence that was mentioned above in the Overview and scope section. This feedback circuit could also, however, without further processing, immediately activate motor responses, thereby leading to premature responding in many situations.

We show below that this amygdala-based process is effective during delay conditioning, where the CS and US overlap in time, but not during trace conditioning, where the CS terminates before the US begins, at least not without the benefit of the adaptively timed learning mechanisms that are described in the next section. Thus, although the CogEM model can realize the first behavioral competence that is summarized above, it cannot realize the second and third competences, which involve bridging temporal gaps between CS, US, and conditioned responses (as discussed above). Mechanisms that realize the second and third behavioral competences enable the brain to learn during trace conditioning.

It is also important to acknowledge that, as reviewed above, the amygdala may have a time-limited role during aversive conditioning (Lee & Kim, 2004). As the association of eyeblink CS-US becomes more consolidated through the strengthening of direct thalamo-cortical and cortico-cortical learned associations, the role of the amygdala may become less critical.

Spectral Timing model and hippocampal time cells

The third model, called the Spectral Timing model, clarifies how the brain learns adaptively timed responses in order to acquire rewards and other goal objects that are delayed in time, as occurs during trace conditioning. Spectral timing enables the model to bridge an ISI, or temporal gap, of hundreds of milliseconds, or even seconds, between the CS offset and US onset. This learning mechanism has been called spectral timing because a “spectrum” of cells respond at different, but overlapping, times and can together generate a population response for which adaptively timed cell responses become maximal at, or near, the time when the US is expected (Grossberg & Merrill, 1992, 1996; Grossberg & Schmajuk, 1989), as has been shown in neurophysiological experiments about adaptively timed conditioning in the hippocampus (Berger & Thompson, 1978; Nowak & Berger, 1992; see also Tieu et al., 1999).

Each cell in such a spectrum reaches its maximum activity at a different time, and cells that respond later have activities that are broader in time, a property that is called a Weber law, or scalar timing, property (Gibbon, 1977). Recent neurophysiological data about “time cells” in the hippocampus have supported the Spectral Timing model prediction of a spectrum of cells with different peak activity times that obey a Weber law. Indeed, such a Weber law property was salient in the data of MacDonald et al. (2011), who wrote: “…the mean peak firing rate for each time cell occurred at sequential moments, and the overlap among firing periods from even these small ensembles of time cells bridges the entire delay. Notably, the spread of the firing period for each neuron increased with the peak firing time…” (p. 3). MacDonald et al. (2011) have hereby provided direct neurophysiological support for the prediction of spectral timing model cells (“small ensembles of time cells”) that obey the Weber law property (“spread of the firing period…increased with the peak firing time”).

To generate the adaptively timed population response, each cell's activity is multiplied, or gated, by an adaptive weight before the memory-gated activity adds to the population response. During conditioning, each weight is amplified or suppressed according to the extent to which the activity of its cell does, or does not, overlap the times at which the US occurs; that is, times around the ISI between CS and US. Learning thereby amplifies signals from cells whose timing at least partially matches the ISI. Most cell activity intervals do not match the ISI perfectly. However, after such learning, the population response – the sum of the gated signals from all the cells – is well timed to the ISI, and typically peaks at or near the expected time of US onset. This sort of adaptive timing endows the nSTART model with the ability to learn associations between events that are separated in time, notably between a CS and US during trace conditioning.
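For concreteness, the following minimal Python sketch simulates a small spectrum of cells of the kind just described. Each site integrates a sustained CS input at its own rate, and a habituative gate depletes in proportion to the resulting signal, so that each gated signal is unimodal in time. The integration rates, gate constants, and sigmoid used here are illustrative assumptions rather than the model's published parameters; the sketch is intended only to show that later-peaking gated signals are also broader in time, the Weber law property discussed above.

```python
# Minimal sketch of a spectral-timing "time cell" spectrum (illustrative
# parameters, not the published nSTART values). Each site integrates a
# sustained CS input at its own rate; a habituative gate depletes in
# proportion to the resulting signal, so the gated signal is unimodal.
import numpy as np

dt = 0.001                            # 1-ms steps
t = np.arange(0.0, 2.0, dt)           # simulate 2 s
rates = np.linspace(1.0, 10.0, 10)    # assumed spectrum of integration rates (1/s)

def f(x):                             # sigmoidal output signal (illustrative form)
    return x**2 / (0.25 + x**2)

for r in rates:
    x, y = 0.0, 1.0
    g = np.zeros(len(t))
    for k in range(len(t)):
        x += dt * r * (-x + 1.0)                          # respond to a sustained CS input
        y += dt * (0.5 * (1.0 - y) - 4.0 * f(x) * y)      # habituative transmitter gate
        g[k] = f(x) * y                                   # gated, unimodal "time cell" signal
    peak_time = t[np.argmax(g)]
    half_max_width = dt * np.sum(g > 0.5 * g.max())       # full width at half maximum
    print(f"rate {r:4.1f}: peak at {peak_time:.2f} s, half-max width {half_max_width:.2f} s")
# Later-peaking sites are also broader in time: the Weber-law signature of time cells.
```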

Evidence for adaptive timing has been found during many different types of reinforcement learning. For example, classical conditioning is optimal at a range of inter-stimulus intervals between the CS and US that are characteristic of the task, species, and age, and is typically attenuated at zero ISI and long ISIs. Within an operative range, learned responses are timed to match the statistics of the learning environment (e.g., Smith, 1968).

Although the amygdala has been identified as a primary site in the expression of emotion and stimulus-reward associations (Aggleton, 1993), as summarized in Figs. 2 and 5, the hippocampal formation has been implicated in the adaptively timed processing of cognitive-emotional interactions. For example, Thompson et al. (1987) distinguished two types of learning that go on during conditioning of the rabbit Nictitating Membrane Response: adaptively timed “conditioned fear” learning that is linked to the hippocampus, and adaptively timed “learning of the discrete adaptive response” that is linked to the cerebellum. In particular, neurophysiological evidence has been reported for adaptive timing in entorhinal cortex activation of hippocampal dentate and CA3 pyramidal cells (Berger & Thompson, 1978; Nowak & Berger, 1992) to which the more recently reported “time cells” presumably contribute.

Spectral timing has been used to model challenging behavioral, neurophysiological, and anatomical data about several parts of the brain: the hippocampus to maintain motivated attention on goals for an adaptively timed interval (Grossberg & Merrill, 1992, 1996; cf. Friedman, Bressler, Garner, & Ziv, 2000), the cerebellum to read out adaptively timed movements (Fiala, Grossberg, & Bullock, 1996; Ito, 1984), and the basal ganglia to release dopamine bursts and dips that drive new associative learning in multiple brain regions in response to unexpectedly timed rewards and non-rewards (Brown, Bullock, & Grossberg, 1999, 2004; Schultz, 1998; Schultz et al., 1992).

Distinguishing expected and unexpected disconfirmations

Adaptive timing is essential for animals that actively explore and learn about their environment, since rewards and other goals are often delayed in time relative to the actions that are aimed at acquiring them. The brain needs to be dynamically buffered, or protected against, reacting prematurely before a delayed reward can be received. The Spectral Timing model accomplishes this by predicting how the brain distinguishes expected non-occurrences, also called expected disconfirmations, of reward, which should not be allowed to interfere with acquiring a delayed reward, from unexpected non-occurrences, also called unexpected disconfirmations, of reward, which can trigger the usual consequences of predictive failure, including reset of working memory, attention shifts, emotional rebounds, and the release of exploratory behaviors. In the nSTART model, and the START model before it, spectral timing circuits generate adaptively timed hippocampal responses that can bridge temporal gaps between CS and US and provide motivated attention to maintain activation of the hippocampus and neocortex between those temporal gaps (Figs. 2 and 6).

Fig. 6.

Fig. 6

In the START model, conditioning, attention, and timing are integrated. Adaptively timed hippocampal signals R maintain motivated attention via a cortico-hippocampal-cortical feedback pathway, at the same time that they inhibit activation of orienting system circuits A via an amygdala drive representation D. The orienting system is also assumed to occur in the hippocampus. The adaptively timed signal is learned at a spectrum of cells whose activities respond at different rates r j and are gated by different adaptive weights z ij . A transient Now Print learning signal N drives learned changes in these adaptive weights. In the nSTART model, the hippocampal feedback circuit operates in parallel to the amygdala, rather than through it [Reprinted with permission from Grossberg and Merrill (1992)]

What spares an animal from erroneously reacting to expected non-occurrences of reward as predictive failures? Why does an animal not immediately become so frustrated by the non-occurrence of such a reward that it prematurely shifts its attentional focus and releases exploratory behavior aimed at finding the desired reward somewhere else, leading to relentless exploration for immediate gratification? Alternatively, if the animal does wait, but the reward does not appear at the expected time, then how does the animal then react to the unexpected non-occurrence of the reward by becoming frustrated, resetting its working memory, shifting its attention, and releasing exploratory behavior?

Any solution to this problem needs to account for the fact that the process of registering ART-like sensory matches or mismatches is not itself inhibited (Fig. 3): if the reward happened to appear earlier than expected, the animal could still perceive it and release consummatory responses. Instead, the effects of these sensory mismatches upon reinforcement, attention, and exploration are somehow inhibited, or gated off. That is, a primary role of such an adaptive timing mechanism seems to be to inhibit, or gate, the mismatch-mediated arousal process whereby a disconfirmed expectation would otherwise activate widespread signals that could activate negatively reinforcing frustrating emotional responses that drive extinction of previous consummatory behavior, reset working memory, shift attention, and release exploratory behavior.

The START model unifies networks for spectrally timed learning and the differential processing of expected versus unexpected non-occurrences, or disconfirmations (Fig. 6). In START, learning from sensory cortex to amygdala in Si → D pathways is supplemented by a parallel Si → H hippocampal pathway. This parallel pathway embodies a spectral timing circuit. The spectral timing circuit supports adaptively timed learning that can bridge temporal gaps between cues and reinforcers, as occurs during trace conditioning. As shown in Fig. 6, both of these learned pathways can generate an inhibitory output signal to the orienting system A. As described within ART (Fig. 3c), the orienting system is activated by novelty-sensitive mismatch events. Such a mismatch can trigger a burst of nonspecific arousal that is capable of resetting the currently active recognition categories that caused the mismatch, while triggering opponent emotional reactions, attention shifts, and exploratory behavioral responses. The inhibitory pathway from D to A in Fig. 6 prevents the orienting system from causing these consequences in response to expected disconfirmations, but not to unexpected disconfirmations (Grossberg & Merrill, 1992, 1996). In particular, read-out from the hippocampal adaptive timing circuit activates D which, in turn, inhibits A. At the same time, adaptively timed incentive motivational signals to the prefrontal cortex (pathway D → Si (2) in Fig. 6) are supported by adaptively timed output signals from the hippocampus that help to maintain motivated attention, and a cognitive-emotional resonance for a task-appropriate duration.

Thus, in the START model, two complementary pathways are proposed to control spectrally-timed behavior: one excites adaptively-timed motivated attention and responding, and the other inhibits orienting responses in response to expected disconfirmations. Adaptively-timed motivated attention is mediated through an inferotemporal-amygdala-orbitofrontal positive feedback loop in which conditioned reinforcer learning and incentive motivational learning work together to rapidly focus attention upon the most salient cues, while blocking recognition of other cues via lateral inhibition (see Figs. 5 and 6). The hippocampal adaptive timing circuit works in parallel to maintain activity in this positive feedback loop and thereby focus motivated attention on salient cues for a duration that matches environmental contingencies.
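The gating logic just described can be summarized schematically. The following sketch is not the model's equations; with an assumed inhibition window and illustrative event times, it only illustrates how a sensory mismatch triggers orienting and reset when, and only when, it escapes the adaptively timed inhibition of the orienting system.

```python
# Schematic sketch (not the published model equations) of how adaptively timed
# inhibition of the orienting system distinguishes expected from unexpected
# disconfirmations: a mismatch triggers reset/orienting only if the timed
# inhibitory signal is not active at that moment.

def orienting_triggered(mismatch_time_s, timed_inhibition_window_s):
    """Return True if a mismatch at mismatch_time_s escapes the adaptively
    timed inhibition and can reset attention and release exploration."""
    start, end = timed_inhibition_window_s
    inhibited = start <= mismatch_time_s <= end   # timed D -> A inhibition
    return not inhibited

# Suppose conditioning has timed the inhibitory gate around the expected
# reward time (e.g., an ISI of about 0.5 s, so inhibition spans ~0.3-0.7 s).
window = (0.3, 0.7)

# Expected disconfirmation: no reward yet at 0.5 s, but that is expected,
# so the orienting system stays inhibited and motivated attention is maintained.
print(orienting_triggered(0.5, window))   # False -> no reset

# Unexpected disconfirmation: the reward still has not arrived at 1.2 s,
# after the timed inhibition has lapsed, so orienting and reset are released.
print(orienting_triggered(1.2, window))   # True -> reset, attention shift
```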

nSTART model

The nSTART model builds upon, extends, and unifies the ART, CogEM, and START models in several ways to explain data about normal and abnormal learning and memory. First, nSTART incorporates a simplified model hippocampus and adaptively timed learning within the model's thalamo-hippocampal and cortico-hippocampal connections (Fig. 2). Second, nSTART incorporates a simplified version of ART category learning in its bottom-up cortico-cortical connections. Third, learning in these connections, and in the model's hippocampo-cortical connections, is modulated by a simple embodiment of BDNF. Fourth, the sensory cortical and orbitofrontal cortical processing stages habituate in an activity-dependent way, a property that has previously been used to model other cortical development and learning processes, such as the development of visual cortical area V1 (e.g., Grossberg & Seitz, 2003; Olson & Grossberg, 1998).

The nSTART model focuses on amygdala and hippocampal interactions with the sensory cortex and orbitofrontal cortex during conditioning (Figs. 2 and 6), with the hippocampus required to support learning and memory consolidation, especially during learning experiences such as trace conditioning wherein a temporal gap between the associated stimuli needs to be bridged, as described above. Consolidation is enabled, in the brain and in the model, by a self-organizing process whereby active neurons and specific neural connections are reinforced and strengthened through positive feedback.

BDNF-mediated hippocampal activation is proposed to maintain and enhance cortico-cortical resonances that strengthen and stabilize partial learning based on previously experienced bottom-up sensory inputs. This partial learning occurs during conditioning trials within the bottom-up adaptive filters that activate learned recognition categories, and within the corresponding top-down expectations. After the consolidation process strengthens these pathways, the hippocampus is no longer required for performance of CRs, but rather the prefrontal cortex takes on a critical role in generating successful performance of the CR in concert with the associated thalamic sensory input (Takehara et al., 2003) and amygdala-driven motivational support. Since amygdala and prefrontal cortex provide input to the pontine nuclei, their collective activity there reflects the salience of the CS in generating a trace CR (Siegel et al., 2012; Siegel et al., 2015). The prefrontal cortex interacts with the cerebellum via the pontine nucleus to directly mediate adaptively timed conditioned responses (Weiss & Disterhoft, 2011; Woodruff-Pak & Disterhoft, 2007). A detailed biochemical model of how the cerebellum learns to control adaptively timed conditioned responses is developed in Fiala, Grossberg, and Bullock (1996), with the Ca++-modulated metabotropic glutamate receptor (mGluR) system playing a critical role in enabling temporal gaps to be bridged via a spectral timing circuit.

Linking consciousness, conditioning, and consolidation

The nSTART model traces the link between consciousness and conditioning to cognitive-emotional resonances that are sustained long enough to support consciousness. Such cognitive-emotional resonances maintain core consciousness (Damasio, 1999) and the ability to make responses (somatosensory responses, in the case of eyeblink conditioning) that depend on interactions between the sensory cortex and orbitofrontal cortex, or thalamus and medial prefrontal cortex (Powell & Churchwell, 2002). The nSTART model proposes that, when the hippocampus is removed, and with it the capacity to sustain a temporally prolonged cognitive-emotional resonance and adaptively timed focusing of motivated attention upon cognitively relevant information, then core consciousness and performance may be impaired. The model hereby explains how interactions among the thalamus, hippocampus, amygdala, and cortex may support the conscious awareness that is needed for trace conditioning, but not delay conditioning (Clark & Squire, 1998).

As explained by the model, memory consolidation during trace conditioning builds upon cooperative interactions among several different neural pathways in which learning takes place during trace conditioning trials. Consider the case of the circuits in Figs. 4 and 5, for example. A property of the CogEM model, which is supported by neurophysiological data, as summarized below, is that the (sensory cortex)→(orbitofrontal cortex) pathway, by itself, is not able to initiate efficient conditioning. Motivational support is needed as well. How this is proposed to occur is illustrated by considering what would happen if the sensory cortex and prefrontal cortex were lumped together, as in Fig. 4a. Then, after a reinforcing cue activated a sensory representation S, it could activate a motor representation M at the same time that it also sent conditioned reinforcer signals to a drive representation D such as the amygdala. As a result, a motor response could be initiated before the sensory representation received incentive motivational feedback to determine whether the sensory cue should generate a response at that time. For example, eating behavior might be initiated before the network could determine if it was hungry.

This deficiency is corrected by interactions between a sensory cortex and its prefrontal, notably orbitofrontal, cortical projection, as in Fig. 4b and its anatomical interpretation in Fig. 5. Here, the various sensory cortices play the role of the first cortical stage S(1)CS of the sensory representations, the orbitofrontal cortex plays the role of the second cortical stage S(2)CS of the sensory representations, and the amygdala and related structures play the role of the drive representations D. This two-stage sensory representation overcomes the problem just mentioned by assuming that each orbitofrontal cell obeys a polyvalent constraint whereby it can fire vigorously only if it receives input from its sensory cortex and from a motivational source such as a drive representation. This polyvalent constraint on the model prefrontal cortex prevents this region from triggering an action until it gets incentive feedback from a motivationally-consistent drive representation (Grossberg, 1971, 1982). More specifically, presentation of a given cue, or CS, activates the first stage S(1)CS of its sensory representation (in sensory cortex) in Fig. 4b. This activation is stored in short-term memory using positive feedback pathways from the sensory representation to itself. The stored activity generates output signals to all the drive representations with which the sensory representation is linked, as well as to the second stage S(2)CS of the sensory representation (in prefrontal cortex). The second stage S(2)CS obeys the polyvalent constraint: It cannot fire while the CS is stored in short-term memory unless it receives converging signals from the first sensory stage (via the S(1)CS → S(2)CS pathway) and from a drive representation (via the S(1)CS → D → S(2)CS pathway).

Early in conditioning, a CS can activate its representation S(1)CS in the sensory cortex, but cannot vigorously activate its representation S(2)CS in the orbitofrontal cortex, or a drive representation D in the amygdala. A US can, however, activate D. When the CS and US are paired appropriately through time, the conditioned reinforcer adaptive weights in the S(1)CS → D pathway can be strengthened. The converging CS-activated inputs from S(1)CS and US-activated inputs from D at S(2)CS also enable the adaptive weights in the incentive motivational pathway D → S(2)CS to be strengthened. After conditioning, during retention testing when only the CS is presented, the two pathways S(1)CS → S(2)CS and S(1)CS → D → S(2)CS can supply enough converging input to fire the orbitofrontal representation S(2)CS without the help of the US.
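A minimal sketch, assuming simple threshold and gating rules in place of the model's shunting and outstar equations, can illustrate the polyvalent constraint and the conditioning sequence just described: before training, the CS alone cannot fire the orbitofrontal stage S(2)CS; after paired CS-US trials strengthen the S(1)CS → D and D → S(2)CS weights, the CS alone supplies enough converging input to fire it. All numerical values and rules below are illustrative.

```python
# Minimal sketch of the CogEM two-stage sensory representation and its
# polyvalent constraint. The threshold and learning rules below are
# illustrative stand-ins for the model's shunting and outstar equations.

class CogEMSketch:
    def __init__(self):
        self.w_cs_to_drive = 0.0   # conditioned reinforcer weight, S(1)_CS -> D
        self.w_drive_to_s2 = 0.0   # incentive motivational weight, D -> S(2)_CS
        self.lr = 0.5              # illustrative learning rate

    def drive_activity(self, cs_on, us_on):
        # The US activates the drive representation D directly; the CS can do
        # so only through its learned conditioned-reinforcer weight.
        return (1.0 if us_on else 0.0) + (self.w_cs_to_drive if cs_on else 0.0)

    def s2_activity(self, cs_on, us_on):
        # Polyvalent constraint: the orbitofrontal stage S(2)_CS is gated
        # multiplicatively by its sensory input from S(1)_CS and by incentive
        # feedback from D, so neither input alone can fire it.
        sensory = 1.0 if cs_on else 0.0
        incentive = (0.1 + self.w_drive_to_s2) * self.drive_activity(cs_on, us_on)
        return sensory * incentive

    def s2_fires(self, cs_on, us_on):
        return self.s2_activity(cs_on, us_on) > 0.25

    def conditioning_trial(self, cs_on=True, us_on=True):
        d = self.drive_activity(cs_on, us_on)
        o2 = self.s2_activity(cs_on, us_on)
        if cs_on and d > 0.5:        # S(1)_CS samples the active drive representation
            self.w_cs_to_drive += self.lr * (1.0 - self.w_cs_to_drive)
        if d > 0.5 and o2 > 0.05:    # D samples the active orbitofrontal stage
            self.w_drive_to_s2 += self.lr * (1.0 - self.w_drive_to_s2)

model = CogEMSketch()
print("CS alone fires S(2) before training?", model.s2_fires(cs_on=True, us_on=False))
for _ in range(5):                   # paired CS-US (delay-like) trials
    model.conditioning_trial()
print("CS alone fires S(2) after training?", model.s2_fires(cs_on=True, us_on=False))
```

Because the sensory and incentive inputs gate S(2)CS multiplicatively in this sketch, neither input alone can trigger a response, which is the behavioral point of the polyvalent constraint.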

These properties are consistent with the following anatomical interpretation. The amygdala and related structures have been identified in both animals and humans to be a brain region that is involved in learning and eliciting memories of experiences with strong emotional significance (Aggleton, 1993; Davis, 1994; Gloor et al., 1982; Halgren, Walter, Cherlow, & Crandall, 1978; LeDoux, 1993). The orbitofrontal cortex is known to be a major projection area of the ventral or object-processing cortical visual stream (Barbas, 1995, 2007; Fulton, 1950; Fuster, 1989; Rolls, 1998; Wilson, Scalaidhe, & Goldman-Rakic, 1993). Cells in the orbitofrontal cortex are sensitive to the reward associations of sensory cues, as well as to how satiated the corresponding drive is at any time (e.g., Mishkin & Aggleton, 1981; Rolls, 1998, 2000). The feedback between the prefrontal and sensory cortical stages may be interpreted as an example of the ubiquitous positive feedback that occurs between cortical regions including prefrontal and sensory cortices (Felleman & Van Essen, 1991; Höistad & Barbas, 2008; Macchi & Rinvik, 1976; Sillito, Jones, Gerstein, & West, 1994; Tsumoto, Creutzfeldt, & Legéndy, 1978; van Essen & Maunsell, 1983). In CogEM, it provides a top-down ART attentional priming signal that obeys the ART Matching Rule. Finally, the CogEM, and nSTART, models are consistent with data suggesting that the ventral prefrontal cortex and the amygdala are involved in the process by which responses are selected on the basis of their emotional valence and success in achieving rewards (Damasio, Tranel, & Damasio, 1991; Passingham, 1997). In particular, Fuster (1989) has concluded from studies of monkeys that the orbitofrontal cortex helps to suppress inappropriate responses. These monkey data are consistent with clinical evidence that patients with injury to orbitofrontal cortex tend to behave in an inappropriate manner (Blumer & Benson, 1975; Liddle, 1994).

Bridging the temporal gap: The hippocampus does this, not the amygdala

The need to regulate orbitofrontal outputs using drive information puts into sharp relief the problem that the brain needs to solve in order to be capable of trace conditioning, or indeed of any learning wherein there is a temporal gap between the stimuli that need to be associated: If the amygdala cannot bridge the temporal gap between CS and US during trace conditioning, what can? If there were no structure capable of bridging that gap, then either the motivational appropriateness of responding would be sacrificed, or the ability to learn across temporal gaps would be lost. As briefly noted above, the nSTART model proposes how the brain solves this problem by using the hippocampus to bridge the temporal gap, using spectrally timed learning and BDNF processes in connections from thalamus and sensory cortex to the hippocampus, combined with learned incentive motivational processes and BDNF in connections from the hippocampus to the neocortex (Fig. 2).

Initially, during trace conditioning, the ISI between the CS and US is too large to be bridged by either the direct (sensory cortex)→(orbitofrontal cortex) pathway or by the indirect (sensory cortex)→(amygdala)→(orbitofrontal cortex) pathway. In other words, by the time the US becomes active, CS-activated signals from the sensory cortex to the amygdala and the orbitofrontal cortex have significantly decayed, so that they cannot strongly drive associative learning between simultaneously active CS and US representations. In contrast, in the manner explicated by the model, the greater persistence afforded by hippocampal adaptive timing enables CS-activated signals via the hippocampus to bridge this ISI. Then, when paired with the US, which can activate its own sensory cortical and orbitofrontal cortical representations, CS-activated associations can begin to form in the (sensory cortex)→(hippocampus)→(orbitofrontal cortex) pathway, and can support feedback from orbitofrontal cortex to the CS representation in sensory cortex, thereby enabling a sustained cognitive-emotional resonance that can support conscious awareness. Model hippocampal neurotrophins extend this temporal interval and enhance the strength of these effects. Once both the sensory cortex and orbitofrontal cortex are simultaneously active, associations can also start to form directly from the CS-activated object category representation in the sensory cortex to the orbitofrontal cortex, thereby consolidating the learned categorical memory that associates an object category with an object-value category. As these direct connections consolidate, the hippocampus becomes less important in controlling behaviors that are read out from orbitofrontal cortical sites.

After partial conditioning initiates learning in the associated thalamo-cortical and cortico-cortical pathways, adaptively timed hippocampal circuits, and BDNF activity that outlasts them, persist during the memory consolidation process and support resonating cortico-cortical and cortico-hippocampo-cortical activity. The polyvalent constraint on the firing of orbitofrontal cells can therefore be satisfied even after learning trials cease. Without hippocampal support after partial conditioning, this cannot occur. The model suggests that this is why early, but not late, hippocampal lesions interfere with the formation and consolidation of conditioned responses.

Model description

nSTART model overview

The nSTART model is here described in terms of the processing stages that are activated during a conditioning trial, and the functional role of each stage is explained. Fig. 2 illustrates the model as a macrocircuit. Figure 7 shows a set of diagrams that summarize the processing steps and relationships among the model variables. Below they are combined to form a complete circuit diagram (Fig. 18) for which mathematical equations and parameters are also specified. Model parameters have the same values for all simulations except where modifications have been made to simulate lesions or different US levels.

Fig. 7.

Fig. 7

The processing steps for a conditioning trial in the nSTART model are illustrated. Conditioned variables that represent learning are not reset to zero between trials in order to simulate inter-trial learning. These include adaptive weights w Si, w Ai, w Hi, F i, and z ij; and hippocampal and orbitofrontal brain-derived neurotrophic factor (BDNF) B H and B Oi, respectively. (a) External stimuli, I i, activate sensory representations in the sensory cortex S i via the thalamus T i. Orbitofrontal cortical activity O i generates a top-down excitatory feedback signal back to S i. The total excitatory signal, including this positive feedback, is gated by the habituative transmitter gate S mi. (b) Excitatory inputs to orbitofrontal cortex from sensory cortex (S i), amygdala (A), and hippocampus (H) are gated by learned presynaptic weights (w Si, w Ai, and w Hi, respectively). An example of this processing is shown in Fig. 7c. Orbitofrontal BDNF (B Oi) extends the duration of O i activity. The total excitatory signal, including positive feedback, is gated by the habituative transmitter gate O mi. (c) The learned weight w Si from sensory cortex to orbitofrontal cortex is modulated by orbitofrontal and BDNF signals. (d) Amygdala (A) receives inputs from sensory cortex (S i) that are gated by conditioned reinforcer adaptive weights (F i). The transient Now Print signal (N) that drives the learning of adaptively timed hippocampal responses is the difference between the excitatory signal from amygdala (A) and an inhibitory signal from a feedforward amygdala-activated inhibitory interneuron (E), which time-averages amygdala activity. (e) Sensory cortical (S i) inputs to hippocampus (H) learn to adaptively time (z ij) the inter-stimulus interval (ISI) using the Now Print signal (N) to drive learning within a spectral timing circuit. The cells in the spectral timing circuit react to sensory cortical (S i) inputs at 20 different rates that are subscripted with j. The resulting activations (x ij) generate sigmoidal output signals (f(x ij)). These outputs are multiplied by their habituative transmitter gates (y ij) to produce a gated signal spectrum (g ij), which determines the rate at which the adaptive weights (z ij) learn from N. The z ij multiply the g ij to generate net outputs h ij that are added to generate an adaptively timed population input (R) to hippocampus (H). R also regulates hippocampal BDNF (B H), which further extends hippocampal activity through time. H also supports production of orbitofrontal BDNF (B Oi). (f) Hippocampal BDNF (B H) is an indirect promoter of the production of cortical BDNF (B Oi) through its excitatory effect on the activity H. (g) Pontine nuclei (P) are excited by amygdala (A) and orbitofrontal cortex (O) and are the model’s final common pathway for generating a CR. These processing components are combined in Fig. 18

Fig. 18.

Fig. 18

Interacting thalamic, prefrontal cortical, amygdala, and hippocampal processing circuits control adaptively timed responses in conditioning acquisition and maintenance. The circuit diagram is a composite of the macrocircuit structure given in Fig. 2 and the processing detail given in Fig. 7. The text contains the mathematical definitions of the circuit variables

For each trial, conditioning variables are simulated from 1 to 2,000 ms. Three types of trials simulate the learning of conditioning contingencies: acquisition or training (CS-US pairing), retention or testing (CS only), and no stimulus (neither CS nor US) in order to extend the time between the last training trial and the testing trial. Between any two trials, process variables are either reset to initial values, or not, depending on their functional role. There are two types of process variables: one for intra-trial process dynamics (these variables are reset for each trial), and one for inter-trial cumulative learning (these variables are not reset for each trial). Cumulative learning variables are identified below in the discussion of the functional role of each process. See Table 2 for a list of all variables.
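The following sketch illustrates only this trial-scheduling and reset policy, not the model equations themselves: intra-trial activities are re-initialized at the start of every 2,000-ms trial, whereas cumulative learning variables persist across acquisition, no-stimulus, and retention trials. The stimulus timings, state names, and schedule below are illustrative placeholders.

```python
# Sketch of the nSTART trial scheduling described above: each trial lasts
# 2,000 ms; intra-trial state variables are reset at the start of every trial,
# while cumulative learning variables (adaptive weights, BDNF) carry over
# between trials. Stimulus timings and state names are illustrative, and the
# model equations themselves are elided.

DT_MS = 1
TRIAL_MS = 2000

def fresh_intra_trial_state():
    # Activities and gates that return to rest between trials are re-initialized.
    return {"S": 0.0, "O": 0.0, "A": 0.0, "H": 0.0, "gates": 1.0}

def run_trial(trial_type, learned):
    state = fresh_intra_trial_state()
    for t_ms in range(0, TRIAL_MS, DT_MS):
        cs_on = trial_type in ("acquisition", "retention") and 0 <= t_ms < 500
        us_on = trial_type == "acquisition" and 1000 <= t_ms < 1100
        # ... integrate the model equations here, using state, learned,
        #     cs_on, and us_on, and updating `learned` in place ...
    return state

# Cumulative learning variables persist across all trials (no inter-trial reset).
learned = {"w_S": 0.01, "w_A": 0.01, "w_H": 0.01, "F": 0.05, "z": 0.0,
           "B_H": 0.0, "B_O": 0.0}

schedule = ["acquisition"] * 10 + ["no_stimulus"] * 5 + ["retention"]
for trial_type in schedule:
    run_trial(trial_type, learned)
```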

Table 2.

nSTART: system equations, variables, and parameters

System equation Variable Value
(2) Sensory Cortical Dynamics S i Initial value = 0
β S 25
Conditioned Stimulus I 1 = 1
Unconditioned Stimulus I 0 = 1, 2 or 4
f S(S i) See Equation 4
O i See Equation 7
S mi See Equation 6
(3) Thalamic Dynamics T i S i; See Equation 2
(4) Signal Functions in the Recurrent On-Center Off-Surround Network f S(S i) Initial value = 0
max(S i − 0.02, 0)
(5) Habituative Transmitter Gates N mi Initial value = 1
For sensory cortex (S mi), see Equation 6. For prefrontal cortex (O mi), see Equation 13.
(6) Habituative Transmitter Gates: Sensory Cortex S mi Initial value = 1
See Equation 2
(7) Corticocortical Category Learning O i Initial value = 0
β O 12.5
f S(S i) See Equation 4
w Si, w Ai, w Hi See Equation 5
A See Equation 14
H See Equation 16
B Oi See Equation 12
O mi See Equation 13
(8) Prefrontal Cortical Dynamics:
Conditioned Weights at Cortical Synapse (M = S (sensory cortex), A (amygdala) and H (hippocampus))
w Mi Initial values = 0.01
No inter-trial reset.
f M(M) If M=S, see Equation 9;
if M=A, see Equation 10;
if M=H, see Equation 11.
B Oi See Equation 12
O i See Equation 7
(9) Prefrontal Cortical Dynamics:
Conditioned Weights at Cortical Synapse for Sensory Cortex
w Si Initial value = 0.01
f S(S i) See Equation 4
B Oi See Equation 12
O i See Equation 7
(10) Prefrontal Cortical Dynamics:
Conditioned Weights at Cortical Synapse for Amygdala
w Ai Initial value = 0.01
A See Equation 14
B Oi See Equation 12
O i See Equation 7
(11) Prefrontal Cortical Dynamics:
Conditioned Weights at Cortical Synapse for Hippocampus
w Hi Initial value = 0.01
H See Equation 16
B Oi See Equation 12
O i See Equation 7
(12) Cortical BDNF B Oi Initial value = 0
No inter-trial reset.
H See Equation 16
w Hi See Equation 11
(13) Habituative Transmitter Gates: Prefrontal Cortex O mi Initial value = 1
See Equation 7
(14) Amygdala Drive Representation Dynamics A Initial value = 0
β A 40
f S(S i) See Equation 4
F i See Equation 15
(15) Conditioned Reinforcer Learning F i constant value F 0 = 0.50,
Initial value F 1 = 0.05.
No inter-trial reset.
f S(S i) See Equation 4
A See Equation 14
(16) Adaptively-Timed Hippocampal Activity H Initial value = 0
β H 5
R See Equation 17
B H See Equation 27
(17) Adaptively-Timed Population Output Signal R
h ij See Equation 18
(18) Doubly Gated Signal Spectrum (timed responses) h ij Initial value = 0
f(x ij ) See Equation 19
y ij See Equation 22
z ij See Equation 24
(19) Sigmoidal Signal Processing f(x ij ) Initial value = 0
(20) Activation Spectrum x ij Initial value = 0
r j See Equation 21
f S(S i) See Equation 4
(21) Differential Rates of Spectral Timing r j Range from 0.016 to 0.171
j Vary from 1 to 20
(22) Habituative Transmitter Spectrum y ij Initial value = 1
f(x ij ) See Equation 19
(23) Gated Signal Spectrum g ij Initial value = 0
f(x ij ) See Equation 19
y ij See Equation 22
(24) Spectral Learning Law z ij Initial value = 0.
No inter-trial reset.
g ij See Equation 23
N See Equation 25
(25) Now Print Signal N Initial value = 0
A See Equation 14
E See Equation 26
(26) Inhibitory Interneuron E Initial value = 0
(27) Hippocampal BDNF B H Initial value = 0.
No inter-trial reset.
A See Equation 14
R See Equation 17
(28) Pontine Nuclei P Initial value = 0
A See Equation 14
O 1 See Equation 7

Sensory cortex and thalamus

Sensory cortical dynamics

The dynamics of sensory cortex were simulated (Fig. 2). Thalamic activity was set equal to the resultant sensory cortical activity, for computational simplicity (see Eq. 3). CS and US inputs are labeled I 1 and I 0, respectively. Input I i activates the i th sensory cortical cell, i = 0 or 1. The inputs are turned on and off through time by presentation and termination of a CS input (I 1) or US input (I 0), and are defined by a saturating function I = f(σ) = 16σ/(1+3σ) of an external stimulus intensity σ.
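The saturating input function just defined can be evaluated directly, as in the short sketch below; the sample stimulus intensities are arbitrary.

```python
# The saturating input function defined above: I = f(sigma) = 16*sigma / (1 + 3*sigma).
# It grows nearly linearly for weak stimuli and saturates for strong ones.

def stimulus_input(sigma):
    return 16.0 * sigma / (1.0 + 3.0 * sigma)

for sigma in (0.0, 0.25, 1.0, 2.0, 4.0):
    print(f"sigma = {sigma:>4}: I = {stimulus_input(sigma):.2f}")
# As sigma grows, I approaches the asymptote 16/3, i.e., about 5.33.
```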

Sensory cortex cell activities S i compete for a limited capacity of activation via a recurrent on-center off-surround network of cells that obey membrane, or shunting, equations (see Eqs. 1 and 2 below). These recurrent interactions use a nonlinear signal function (see Eq. 4) that contrast-enhances network activity patterns and sustains the contrast-enhanced activities in short-term memory after the input pattern ends. In addition to the bottom-up input I i and the recurrent on-center interactions, excitatory inputs include a top-down attentional signal O i from object-value categories in the orbitofrontal cortex. This feedback pathway closes a bottom-up/top-down feedback loop between sensory cortex and orbitofrontal cortex and gain-amplifies cortico-cortical activity (see Eq. 7).

A habituative transmitter gate S mi multiplies the total excitatory input and is inactivated by it in an activity-dependent way, thereby preventing unlimited perseverative activation of the cortico-cortical excitatory feedback loop (see Eq. 6). This gate can be realized in several ways, one being a presynaptic chemical transmitter that is released by axonal signals, and the other a postsynaptic membrane current. The orbitofrontal cortical cells have an analogous habituative process (see Eq. 13). When all these processes interact, a brief input can trigger sustained cortical activity via the recurrent on-center, modulated by orbitofrontal attentional feedback, until it habituates in an activity-dependent way, or is reset by recurrent competitive interactions.

Signal functions in the recurrent on-center off-surround network

In order to suppress noise in the system and contrast enhance cell activity, the signal function f S(S i) in the recurrent on-center off-surround network is faster-than-linear (Grossberg, 1973, 1980), with a firing threshold that is larger than the passive equilibrium point, and grows linearly with cell activity above threshold (see Eq. 4).

Habituative transmitter gates

The habituative transmitter gate at each sensory cortical cell accumulates at a constant rate up to a maximum value, and is inactivated at a rate proportional to the size of the excitatory signal that it gates, multiplied by the amount of available transmitter (see Eq. 6; Abbott et al., 1997; Grossberg, 1968b, 1972b, 1980).
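A compact sketch can combine the three ingredients just described for a single model sensory cortical cell: shunting dynamics with recurrent on-center feedback, a faster-than-linear signal function with the 0.02 threshold listed in Table 2, and a habituative transmitter gate. The feedback gains, decay rates, and gate constants below are illustrative, and the off-surround competition, thalamic stage, and top-down orbitofrontal feedback of Equation 2 are omitted; the sketch only shows how a brief input can be stored in short-term memory and then habituate.

```python
# Sketch of a single model sensory cortical cell: a shunting equation with
# recurrent self-excitation through a faster-than-linear signal function
# (threshold 0.02, as in Table 2), gated by a habituative transmitter.
# All rates and gains are illustrative; off-surround and top-down terms of
# Equation 2 are omitted for brevity.
import numpy as np

dt = 0.001                              # 1-ms steps
t = np.arange(0.0, 2.0, dt)

def f_S(s):                             # faster-than-linear above-threshold signal
    return max(s - 0.02, 0.0)

S, gate = 0.0, 1.0
trace = np.zeros(len(t))
for k, tk in enumerate(t):
    I = 1.0 if tk < 0.5 else 0.0        # brief CS input (0-500 ms)
    exc = I + 4.0 * f_S(S)              # bottom-up input plus recurrent on-center
    S += dt * 10.0 * (-S + (1.0 - S) * exc * gate)           # gated shunting dynamics
    gate += dt * (0.2 * (1.0 - gate) - 0.3 * exc * gate)     # habituative transmitter gate
    trace[k] = S

for probe in (0.5, 1.0, 2.0):
    print(f"S at t = {probe:.1f} s: {trace[round(probe / dt) - 1]:.2f}")
# The recurrent on-center stores the input in short-term memory after CS
# offset; the habituative gate then gradually weakens the feedback loop.
```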

Orbitofrontal cortex, category learning, and incentive motivational learning

Orbitofrontal cortical dynamics

Sensory cortical activity S 1 can generate excitatory signals to orbitofrontal cortical cells whose activity is O 1. As in the sensory cortex, orbitofrontal cortical cells compete via a recurrent on-center off-surround network whose cells obey membrane, or shunting, equations. These recurrent dynamics enable orbitofrontal cortical activity to contrast-normalize and contrast-enhance its inputs, and allow cell activities that win the competition to persist in short-term memory after inputs terminate. Finally, again as in the model sensory cortex, the total excitatory input to prefrontal cortical cells can habituate in an activity-dependent way (see Eq. 13).

Cortical category learning and incentive motivational learning

Adaptive weights w S1 exist in the pathway from CS-activated sensory cortex to orbitofrontal cortex, and may be strengthened by the conditioning process. These adaptive weight changes constitute the model's category learning process, and are the critical events that enable conditioned responding to occur after memory consolidation is sufficient for hippocampal support to no longer be required.

When a CS is presented before conditioning occurs, it can activate its sensory representation, which sends signals to its orbitofrontal representation, the amygdala, and the hippocampus. These signals cannot, however, yet vigorously activate those regions of the model network. When the US occurs, it can activate its own sensory and orbitofrontal cortical representations, as well as the amygdala and hippocampus. Incentive motivational signals from the amygdala and hippocampus can then be broadcast nonspecifically to many orbitofrontal cortical cells, including those that receive signals from the CS. The hippocampal incentive motivational signals last longer than the amygdala signals because of their capacity for adaptively-timed responding across long ISIs, as will be noted below. Only those orbitofrontal cortical cells that receive a simultaneous combination of CS-activated and US-activated signals can start to vigorously fire.

When O 1 becomes active at the same time that signals from S 1 are active, the adaptive weight w S1 in the corresponding category learning pathway to orbitofrontal cortex (see Eq. 9) can grow. Category learning enables a CS to activate an orbitofrontal representation that can release conditioned responses further downstream. As in the START model, the sensory cortex (see Eq. 2), amygdala (Eq. 14), and hippocampus (Eq. 16) all play a role in this cortico-cortical category learning process, during which incentive motivational learning from both the amygdala and the hippocampus to the orbitofrontal cortex also takes place, with adaptive weights w Ai and w Hi in the corresponding pathways.

After being gated by its adaptive weight w S1, a sensory cortical input to an orbitofrontal cell is multiplicatively modulated, or gated, by the sum of amygdala, hippocampal, and BDNF incentive motivational signals (A, H and B O, respectively). As noted above, when these converging signals are sufficiently large at the beginning of conditioning, O 1 can become active, so all three types of adaptive weights abutting the prefrontal cortical cell, from sensory cortex, amygdala, and hippocampus (w Si, w Ai, w Hi), can be conditioned if their input sources are also active at these times (see Fig. 7b and c). In situations where the ISI is large, as during trace conditioning, the incentive motivational signal from the hippocampus may be large, even if the signal from the amygdala is not.

As explained below, the hippocampus can maintain its activity for an adaptively-timed duration that can span a long trace interval. In addition, BDNF at the hippocampus B H and orbitofrontal cortex B Oi can sustain prefrontal cortical activity for an even longer duration. This action of BDNF captures in a simplified way how BDNF-modulated hippocampal bursting is maintained during memory consolidation.

These adaptive weights all obey an outstar learning law (Grossberg, 1968a, 1969, 1980). In the incentive motivational pathways from amygdala and hippocampus, learning is gated on and off by a sampling signal that grows with amygdala or hippocampal activity, plus BDNF activity (see Eqs. 10 and 11). When the sampling signal is on, it determines the rate at which the corresponding adaptive weight time-averages activity O 1, thereby combining both Hebbian and anti-Hebbian learning properties.
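The following sketch illustrates this gated outstar rule with illustrative signals and rates (it is not Equations 10-11): the weight changes only while its sampling signal is on, and then tracks the postsynaptic orbitofrontal activity, so it can either grow toward that activity or decay toward zero.

```python
# Sketch of the gated outstar learning rule described above. The adaptive
# weight changes only while its sampling signal (here, hippocampal plus BDNF
# activity) is on, and then tracks the postsynaptic orbitofrontal activity
# O_1, so it can grow (Hebbian) or decay (anti-Hebbian) toward O_1.
# The signals and rates below are illustrative stand-ins for Equations 10-11.

def update_weight(w, sampling_signal, postsynaptic_activity, dt=0.001, rate=2.0):
    # dw/dt = rate * sampling_signal * (postsynaptic_activity - w)
    return w + dt * rate * sampling_signal * (postsynaptic_activity - w)

w_H = 0.01                                          # initial hippocampus -> orbitofrontal weight
for step in range(2000):                            # one 2-s conditioning trial, 1-ms steps
    t = step * 0.001
    sampling = 1.0 if 0.2 <= t <= 1.5 else 0.0      # hippocampal + BDNF sampling signal
    O_1 = 0.8 if 0.5 <= t <= 1.2 else 0.0           # orbitofrontal activity during CS-US pairing
    w_H = update_weight(w_H, sampling, O_1)

print(f"incentive motivational weight after one trial: {w_H:.2f}")
# While the sampling signal is on but O_1 is silent, w_H decays toward zero;
# while both are on, w_H grows toward O_1's activity level.
```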

Orbitofrontal BDNF

Orbitofrontal BDNF B Oi (see Eq. 12) slowly time-averages the level of hippocampal activity H, and thereby extends its duration. In this way, the BDNF process helps to maintain cortical activity across an extended CS-US temporal gap during trace conditioning, and thus to support the consolidation of cortico-cortical category learning.

Habituative transmitter gates

As described above, the habituative transmitter gate at each cortical cell prevents unlimited perseverative activation of orbitofrontal cortical cells via their positive feedback loops. As before, such a habituative transmitter gate accumulates at a constant rate up to a maximum value, and is inactivated at a rate proportional to the size of the excitatory signal that it gates, multiplied by the amount of available transmitter (see Eq. 5).

Amygdala and conditioned reinforcer learning

Amygdala drive representation dynamics

The amygdala has a complex cytoarchitecture that represents emotional states and generates incentive motivational signals (Aggleton & Saunders, 2000). The amygdala is simplified in nSTART to enable conditioned reinforcer learning and incentive motivational learning to occur, as in the CogEM and START models (see Fig. 6). In the nSTART model, a single drive representation of amygdala activity A (see Eq. 14) is activated by the sum of excitatory inputs from sensory cortex S i that are gated by conditioned reinforcer adaptive weights.

Conditioned reinforcer learning

These adaptive weights determine how well sensory cortex can activate A. Conditioned reinforcer learning is a key step in converting a conditioned stimulus into a conditioned reinforcer that can activate the amygdala. Together with incentive motivational learning in the pathway from the amygdala to the orbitofrontal cortex, a sensory cortical input can stimulate the amygdala which, in turn, can provide motivational support to fire orbitofrontal cortical cells (Fig. 2).

The CS cannot strongly excite the drive representation activity A before conditioning takes place. During conditioning, the US can directly activate A via its sensory representation. Pairing of CS-activated signals from the sensory cortex to the amygdala with those of the US to the amygdala causes conditioned reinforcer learning in the adaptive weights within the sensory cortex-to-amygdala pathways.

As in the case of incentive motivational learning, the learning law that is used for conditioned reinforcer learning is an outstar learning law (see Eq. 15) whereby a sensory cortical representation can sample and learn a spatial pattern of conditioned reinforcer adaptive weights across multiple drive representations. The current model simulations only consider such learning at a single drive representation.

Hippocampus and adaptive timing

Adaptively-timed hippocampal learning

As noted above, the hippocampus receives adaptively timed inputs that can maintain its activity for a duration that can span the trace interval. The hippocampus can hereby provide its own incentive motivational pathway to orbitofrontal cortical cells in cases when the amygdala cannot. In addition, BDNF at the model hippocampus and prefrontal cortex can sustain prefrontal cortical activity for an even longer duration. The adaptively timed “spectral timing” process spans several processing steps.

Adaptively-timed hippocampal activity

The adaptively timed signal R and the hippocampal BDNF signal B H together maintain activity of the model hippocampus (see Eq. 16) across trace conditioning intervals, and also during periods after partial conditioning when no further external inputs are presented. In these latter periods, sustained hippocampal activity provides the incentive motivational signals that support memory consolidation of cortico-cortical category learning.

Figure 7f shows the functional relationships between hippocampal BDNF (B H), hippocampal activity (H), the hippocampal-to-orbitofrontal learned weight (w Hi), and the hippocampal-to-orbitofrontal stimulation of cortical BDNF (B Oi) production.
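A minimal sketch, with illustrative functional forms and rates (only the hippocampal decay rate 5 is taken from Table 2), can illustrate this relationship: a transient adaptively timed input R drives hippocampal activity H directly and also drives the much slower BDNF variable B H, which keeps H elevated long after R has decayed.

```python
# Minimal sketch (illustrative forms, not Equations 16 and 27) of how
# hippocampal BDNF extends hippocampal activity: the adaptively timed signal R
# drives H directly and also drives the much slower BDNF variable B_H, which
# keeps H active long after R has decayed.
import numpy as np

dt = 0.001
t = np.arange(0.0, 6.0, dt)

R = np.exp(-0.5 * (t - 0.5) ** 2 / 0.2 ** 2)    # adaptively timed input, peaking near 0.5 s
H = np.zeros_like(t)
B_H = np.zeros_like(t)

for k in range(1, len(t)):
    # Fast hippocampal activity (decay rate 5, Table 2), driven by R and BDNF.
    H[k] = H[k-1] + dt * 5.0 * (-H[k-1] + R[k-1] + B_H[k-1])
    # Slow BDNF production driven by the timed signal R, with slow decay.
    B_H[k] = B_H[k-1] + dt * (-0.2 * B_H[k-1] + 1.5 * R[k-1])

for probe in (0.5, 1.0, 3.0, 6.0):
    k = min(round(probe / dt), len(t) - 1)
    print(f"t = {probe:.1f} s: R = {R[k]:.2f}, H = {H[k]:.2f}, B_H = {B_H[k]:.2f}")
# R has largely decayed by 1 s, while B_H decays over seconds, so H remains
# elevated long after R, supporting consolidation of cortical learning.
```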

Adaptively-timed population output signal

The adaptively timed input from the sensory cortex to the hippocampus is the population output R = Σ i,j h ij of spectrally-timed and learning-gated signals h ij = 8f(x ij)y ij z ij (see Eq. 17). The individual signals h ij are not well timed, but the population response R is, and its activity peaks around the ISI. Adaptively timed learning is thus an emergent property of this entire population of cell sites.

Activation spectrum

The components of the adaptively timed signal R are defined as follows: First, a population of hippocampal cell sites with activities x ij (see Eq. 20) reacts to the excitatory input signal from sensory cortex at a spectrum of rates, ranging from fast to slow, that span the different ISIs to be learned. Activity x ij generates a sigmoidal output signal f(x ij) to the next processing stage.

Habituative transmitter spectrum

Each signal f(x ij) is gated by a habituative transmitter gate y ij (see Eq. 22) that is similar in structure and function to the habituative transmitter gates described above. The different rates at which each spectral activity f(x ij) responds cause the corresponding habituative transmitter y ij to habituate at a different rate. Habituative transmitter y ij multiplies, or gates, the corresponding signal f(x ij) to generate a net output signal g ij (see Eq. 23).

Gated signal spectrum and time cells

Multiplication of the increasing f(x ij) with the decreasing y ij generates a unimodal curve g ij = f(x ij)y ij through time. Each g ij peaks at a different time, and curves that peak at later times have broader activation profiles through time (see Fig. 11c), thereby realizing a Weber law property. Predicted properties of these cell responses were reported in neurophysiological data about hippocampal time cells (MacDonald et al., 2011). The Spectral Timing model predicts how such time cells may be used both to bridge the long ISIs that occur during trace conditioning, and to learn adaptively timed output signals that match the timing of experienced ISIs during delay or trace conditioning. This learning is proposed to occur in the following way.
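The three processing steps just described (activation spectrum, habituative gates, and gated signal spectrum) can be sketched as follows in Python. All functional forms, rates, and parameter values are illustrative assumptions rather than the published Eqs. 20–23, but they suffice to show how slower cells produce gated signals g_j that peak later and with broader profiles.

```python
import numpy as np

# Illustrative parameters only; not the published Eqs. 20-23 or their parameter values.
dt = 1.0                                   # ms per step
T = 1000                                   # simulate 1 s of sustained CS-driven input
rates = np.linspace(0.001, 0.02, 20)       # spectrum of reaction rates r_j, slow to fast

def f(x, n=4, k=0.3):
    """Sigmoid signal function (assumed form)."""
    return x**n / (k**n + x**n)

x = np.zeros_like(rates)                   # spectral activations x_j
y = np.ones_like(rates)                    # habituative transmitter gates y_j (start full)
g_history = np.zeros((T, len(rates)))

for t in range(T):
    x += dt * rates * (-x + 1.0)                       # each x_j integrates the CS at its own rate r_j
    s = f(x)
    y += dt * (0.001 * (1.0 - y) - 0.01 * s * y)       # gates recover slowly and deplete when driven
    g_history[t] = s * y                               # gated sampling signals g_j = f(x_j) y_j

peak_times = np.sort(g_history.argmax(axis=0))
print("g_j peak times (ms), from fastest to slowest cell:", peak_times)
# Slower cells peak later, with lower and broader profiles, so the population of
# gated signals tiles an extended interval after CS onset.
```

With these toy parameters the peaks spread from tens to a few hundred milliseconds after CS onset; the published model uses forms and parameters that span the longer ISIs simulated below.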

Fig. 11

Spectral learning law

To generate the adaptively-timed response R, each signal g ij is multiplied, or gated, by a long-term memory (LTM) trace z ij (see Eq. 24). In addition, g ij helps to control learning by z ij: When g ij is positive, z ij can approach the value of a Now Print learning signal N at a rate proportional to g ij. Each z ij thus changes by an amount that reflects the degree to which the curves g ij and N, which represent sensory and reinforcement values, respectively, are simultaneously large. If g ij is large while N is large, then z ij will increase. If g ij is large while N is small, then z ij will decrease. Thus, adaptively timed learning selectively amplifies those z ij whose sampling signals g ij are on when N is on. Since the z ij represent adaptively timed learned traces that persist across trials, they are not reset to initial values between trials but rather are cumulative across trials.

Signal N is activated transiently by increments in amygdala activity, and is thus active at times when the amygdala receives either US or conditioned CS inputs. A direct excitatory output signal from amygdala (see Eq. 14) and an inhibitory signal from an amygdala-activated inhibitory interneuron E (Eq. 26) combine to compute N (Eq. 25); see also Fig. 7d. In response to larger inputs A, N increases in amplitude, but not significantly in duration. Thus, learning rate can change without undermining learned timing.
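The transient character of N can be illustrated with a small sketch. Taking the rectified difference between A and a slower inhibitory copy E is one simple way to obtain an onset-transient signal; the specific forms and rate constants below are assumptions for illustration, not Eqs. 25–26.

```python
import numpy as np

def now_print(amplitude, T=600, dt=1.0):
    """Toy transient Now Print signal N driven by a step in amygdala activity A.

    A excites N directly while also driving a slower inhibitory interneuron E,
    so N is large only just after A increases. Forms and rates are assumptions.
    """
    A = np.zeros(T)
    A[200:500] = amplitude                       # step increase in amygdala activity (e.g., at US onset)
    E = 0.0
    N = np.zeros(T)
    for t in range(T):
        E += dt * 0.02 * (-E + A[t])             # E tracks A with a slower time constant
        N[t] = max(A[t] - 1.5 * E, 0.0)          # rectified difference: an onset-transient response
    return N

for amp in (1.0, 2.0):
    N = now_print(amp)
    active = np.where(N > 0.1 * N.max())[0]
    print(f"A step = {amp}: N peak = {N.max():.2f}, "
          f"active for about {active[-1] - active[0] + 1} ms")
```

Doubling the step in A roughly doubles the peak of N while leaving its active duration nearly unchanged, which is the property that lets the learning rate grow without distorting learned timing.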

Doubly-gated signal spectrum

The adaptive weight z_ij gates the sampling signal g_ij to generate a twice-gated output signal h_ij = 8 f(x_ij) y_ij z_ij from each of the differently timed cell sites (Eq. 18); see also Fig. 11d. Comparison of h_ij with g_ij in Fig. 11d shows how the population response R = Σ_{i,j} h_ij learns to match the ISI.
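A compact sketch of the spectral learning law and the doubly gated population output follows, using toy unimodal sampling signals in place of the f(x_ij) y_ij curves above and an assumed learning rate; it is not Eqs. 17, 18, 24, and 25, only an illustration of their interaction.

```python
import numpy as np

dt = 1.0
T = 1200                                     # ms simulated per trial
ISI = 500                                    # assumed CS-US interval for this toy example
t_axis = np.arange(T)

# Toy sampling signals g_j: unimodal curves that peak at different times and
# broaden as they peak later (stand-ins for the f(x_ij) y_ij curves above).
peak_times = np.linspace(100, 1000, 10)
widths = 0.3 * peak_times
g = np.exp(-((t_axis[:, None] - peak_times[None, :]) ** 2) / (2.0 * widths[None, :] ** 2))

# Toy Now Print signal: a brief pulse around US onset at the ISI.
N = np.where((t_axis >= ISI) & (t_axis < ISI + 50), 1.0, 0.0)

# Spectral learning: each z_j tracks N at a rate proportional to its g_j (assumed rule).
z = np.zeros(len(peak_times))
for trial in range(20):                      # z is not reset between trials
    for t in range(T):
        z += dt * 0.002 * g[t] * (N[t] - z)

# Doubly gated population output on a CS-only test.
R = (g * z[None, :]).sum(axis=1)
print("learned traces z_j:", z.round(2))
print("R peaks at t =", int(R.argmax()), "ms; the ISI used in training was", ISI, "ms")
```

Cells whose sampling signals overlap the Now Print pulse retain large traces, so the summed output R becomes timed to the trained interval even though no single cell is.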

Hippocampal BDNF

R causes production and release of hippocampal BDNF B H (see Eq. 27). Sustained BDNF activity helps to maintain hippocampal activity even longer than R can, and thus extends its incentive motivational support to orbitofrontal cortex across the CS-US interstimulus intervals during trace conditioning and memory consolidation (Fig. 7e).
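The intended effect, that a slowly decaying BDNF signal driven by R can keep hippocampal activity elevated well after R itself has decayed, can be sketched with two toy differential equations; the rates and gains below are assumptions, not Eqs. 16 and 27.

```python
import numpy as np

dt = 1.0                       # ms per step
T = 4000
R = np.zeros(T)
R[200:900] = 1.0               # an adaptively timed hippocampal output burst (toy shape)

H = np.zeros(T)                # hippocampal activity
B = np.zeros(T)                # hippocampal BDNF level (B_H)
h = b = 0.0
for t in range(T):
    b += dt * (0.002 * R[t] - 0.0005 * b)            # BDNF is produced by R and decays slowly
    h += dt * (-0.05 * h + 0.05 * R[t] + 0.05 * b)   # H is driven by both R and BDNF feedback
    H[t], B[t] = h, b

print("H near the end of the R burst:", round(H[890], 2))
print("H two seconds after R ends   :", round(H[2900], 2), "(sustained by slowly decaying BDNF)")
```

Because the BDNF variable decays much more slowly than the hippocampal activity it drives, the sketch shows hippocampal activity persisting long after R has returned to zero.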

The pontine nuclei

Final common path for conditioned output

Projections from the amygdala and orbitofrontal cortex input to the pontine nuclei (Fig. 7g). Pontine activity P controls output signals that generate a CR (Kalmbach et al., 2009; Siegel et al., 2012; Woodruff-Pak & Disterhoft, 2007; see Eq. 28).
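As a simple illustration of the final-common-path idea (the functional form is an assumption, not Eq. 28), pontine output can be sketched as a rectified, weighted sum of its amygdala and orbitofrontal inputs, so that either pathway alone, or both converging, can drive the CR.

```python
def pontine_output(A, O, wA=0.6, wO=1.0, threshold=0.2):
    """Toy final-common-path readout: a rectified, weighted sum of amygdala (A)
    and orbitofrontal (O) inputs. Weights and threshold are illustrative only."""
    return max(wA * A + wO * O - threshold, 0.0)

# Either input channel alone, or both converging, can push pontine activity above threshold.
print(pontine_output(A=0.5, O=0.0))   # amygdala-supported responding
print(pontine_output(A=0.0, O=0.6))   # cortically driven responding (e.g., after consolidation)
print(pontine_output(A=0.5, O=0.6))   # both pathways converging on the pons
```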

Results

Summary of six key simulation measures

Using a single set of model parameters, except for a variable US intensity, the following measures are used to simulate the experimental data. When the simulation includes an intact or partially lesioned hippocampus, the adaptively timed signal within the hippocampus, R, is used to illustrate how the hippocampus reflects CR-timed performance, as observed in many experiments (Berger, 1984; Schmaltz & Theios, 1972; Smith, 1968; Thompson, 1988). Orbitofrontal cortical activity, O, is reported because it is involved in activating downstream conditioned motor outputs (Kalmbach et al. 2009a, 2009b; Siegel et al., 2012; Woodruff-Pak & Disterhoft, 2007) and is a critical site of long-term memory consolidation in the model (see Eq. 7). In addition, the activity of the pontine nuclei P (see Eq. 28) is reported in all cases because it serves as a common output path for the CR (Kalmbach et al. 2009a, 2009b; Siegel et al., 2012; Woodruff-Pak & Disterhoft, 2007). To understand how CR activity is generated in the pons, the activity profiles of the sensory cortex (S), amygdala (A), and hippocampus (H) are also reported.

Simulation of normal trace conditioning

Figure 8a shows behavioral data for normal trace conditioning during rabbit nictitating membrane conditioning for multiple ISIs in response to different US levels (Smith, 1968). These data exhibit the Weber law property whereby smaller ISIs generate earlier response peaks with narrower variances. The data also generally show the typical inverted-U envelope through time at each US intensity level for each ISI curve, as well as collectively for different ISI values. Finally, the data show that, whereas conditioned response timing is only sensitive to the ISI, response amplitude is also sensitive to US intensity (1, 2, and 4 mA).

Fig. 8

(a) Trace conditioning data at multiple inter-stimulus intervals (ISIs) for different unconditioned stimulus (US) levels (Smith, 1968). (b) Simulation of the Smith data by the nSTART model is based on 20 acquisition trials per ISI for time = 1 to 2,000 ms, US level = 1 (solid line), 2 (thicker solid line), and 4 (thickest solid line). The hippocampal output signal R (Eq. 17) is plotted for a retention test trial in response to the conditioned stimulus (CS) alone. Simulating qualitative properties of the data, peak amplitude of each curve is near its associated ISI of 125, 250, 500, and 1,000 ms, respectively. The model is sensitive to US intensity. (c) A comparison of the normal simulation of the Smith data in (b) using US level = 1 (solid line), with simulation of two abnormal treatments: with no hippocampal brain-derived neurotrophic factor (BDNF) (dashed line) and with no hippocampal BDNF and no cortical BDNF (dotted line). Short ISIs show an increase in amplitude, longer ISIs show a decrease. (d) Activity in the pontine nuclei (P) for a retention test in response to the CS only: ISI = 125 ms (dotted line), ISI = 250 ms (dotted-dashed line), ISI = 500 ms (dashed line), ISI = 1,000 ms (solid line). The CS input is shown as a vertical dashed bar starting at CS onset at 1 ms. Short ISIs (125 ms and 250 ms) do not exhibit typical pontine profiles; in vivo, very short ISIs are likely processed directly by the pons and its connection to the cerebellum. As the ISI becomes longer and a conditioned response (CR) is more reliant on the timed orbitofrontal connection to the pons, pontine activity matches the experimental data

In the Smith (1968) experiments, where a living animal has far more complex knowledge, motivation, and attentional distractions than a computational model like nSTART, 110 trials were run on each of 10 consecutive days to obtain the reported CR data, which are smoothed averages of the individual trials. Smith noted that his data of “average topographies present a somewhat distorted picture of individual CRs…the later peak of the averaged response appeared to be later than the mean of the individual responses” (Smith, 1968, p. 683; see Fig. 8a).

Figure 8b shows how hippocampal adaptive timing R in nSTART simulates these properties of normal conditioning on a recall trial, in response to the CS alone, after 20 prior learning trials for each ISI in response to three different US amplitudes. The peak activities and timing of both the cortex and the pontine nuclei (Fig. 8d) reflect the properties of the adaptively timed hippocampal output to them.

When orbitofrontal BDNF B O1 is eliminated after acquisition trials in model simulations, adaptive timing is impacted more negatively for longer ISIs (Fig. 8c). This learning impairment is due to a weakened cortico-cortico-hippocampal feedback loop, which is critical in trace conditioning.

nSTART is robust in that, with a single set of parameters, it can learn long ISIs better under normal conditions when given additional learning trials; for example, the retention test output for ISI = 1,000 ms after 20 and 40 acquisition trials shows that peak R amplitude and timing changed from 0.5616 at 911 ms to 0.5393 at 949 ms, respectively. The activity profiles of the pontine nuclei are consistent with these results: P peak amplitude and timing changed from 1.311 at 639 ms, at 20 trials, to 1.689 at 601 ms, at 40 trials. These peak timings are within the effective 400-ms signaling window that has been found experimentally (Kalmbach et al. 2009a, 2009b; Siegel et al., 2012; Woodruff-Pak & Disterhoft, 2007).

Delay conditioning with and without hippocampus

A comparison of simulations of delay conditioning after five training trials with and without hippocampal lesions (see H in Fig. 9) indicates that an intact model hippocampus is not required for delay conditioning (see P in Fig. 9a), as is also typically the case in the data (see Table 1). The involvement of the amygdala in each case (normal, 50 % partial ablation, and 80 % partial ablation) is apparent when their peak activities are compared. While in vivo the cerebellum typically is able to learn delay conditioning without forebrain processing, the model illustrates how the amygdala may motivationally support a parallel input channel to the pontine activity found in normal delay conditioning.

Fig. 9

The hippocampus is not required for delay conditioning. (a) To simulate hippocampal lesions before any delay conditioning trials, the scalar β H in the hippocampus excitation term in Eq. 16 was progressively decreased. There were five training trials with US onset at 550 ms, US duration = 50 ms, US offset at 600 ms, and US level = 1. The results show network activations in response to a CS after training: sensory cortex (S), orbitofrontal cortex (O), hippocampus (H), amygdala (A), hippocampal adaptive timing (R), and the pontine nuclei (P). The CS is represented by vertical solid lines, the US onset during training by a vertical dashed line (in delay conditioning, the CS offset and the US offset coincide). Delay conditioning shows little change in pontine activity in the normal (solid line) versus 50 % (dashed line) and 80 % (dotted line) lesions. (b) Ten learning trials, instead of the five trials in (a), yield better learning, including at the orbitofrontal cortex

Table 1.

The specific impact on learning and memory of the conditioned response caused by lesions of the hippocampus, cortex, amygdala, and thalamus is related to the phase of conditioning in which the lesions occur. Representative studies on rats, rabbits, and humans used various experimental preparations and performance criteria, yet they show consistent patterns of effects on the acquisition and retention of a conditioned response (CR) in delay and trace paradigms, depending on the age of the memory (degree of consolidation)

Lesions of the hippocampus
 Delay paradigm
  Before conditioning: CR acquisition YES (Berger 1984; Chen et al. 1995; Daum et al. 1996; Ivkovich & Thompson 1997; Lee & Kim 2004; Port et al. 1986; Schmaltz & Theios 1972; Shors et al. 1992; Solomon & Moore 1975; Weiskrantz & Warrington 1979)
  Early after conditioning: CR retention YES (Akase et al. 1989; Orr & Berger 1985; Port et al. 1986); CR retention NO for long ISIs (Beylin et al. 2001)
  Late after conditioning: CR retention YES (Akase et al. 1989)
 Trace paradigm
  Before conditioning: CR acquisition NO (Anagnostaras et al. 1999; Berry & Thompson 1979; Clark & Squire 1998; Garrud et al. 1984; Gabrieli et al. 1995; Ivkovich & Thompson 1997; James et al. 1987; Kaneko & Thompson 1997; Kim et al. 1995; Little et al. 1984; McGlinchey-Berroth et al. 1997; Orr & Berger 1985; Flores & Disterhoft 2009; Schmajuk et al. 1994; Schmaltz & Theios 1972; Solomon & Moore 1975; Solomon et al. 1990; Weiss & Thompson 1991a, 1991b; Woodruff-Pak 2001)
  Early after conditioning: CR retention NO (Kim et al. 1995; Moyer et al. 1990; Takehara et al. 2003); CR retention YES for short ISIs (Walker & Steinmetz 2008)
  Late after conditioning: CR retention YES (Kim et al. 1995; Takehara et al. 2003)
Lesions of the cortex
 Delay paradigm
  Before conditioning: CR acquisition YES (Mauk & Thompson 1987; McLaughlin et al. 2002; Oakley & Steele Russell 1972; Takehara et al. 2003; Yeo et al. 1984)
  Early after conditioning: CR retention YES (Oakley & Steele Russell 1972; Takehara et al. 2003; Yeo et al. 1984)
  Late after conditioning: CR retention YES (Oakley & Steele Russell 1972; Takehara et al. 2003; Yeo et al. 1984)
 Trace paradigm
  Before conditioning: CR acquisition YES (Frankland & Bontempi 2005; McLaughlin et al. 2002 [short ISI]; Oakley & Steele Russell 1972; Simon et al. 2005; Takehara et al. 2003; Yeo et al. 1984); CR acquisition impaired (Kronforst-Collins & Disterhoft 1998; McLaughlin et al. 2002 [long ISI]; Weible et al. 2000)
  Early after conditioning: CR retention YES (Frankland & Bontempi 2005; Oakley & Steele Russell 1972; Simon et al. 2005; Takehara et al. 2003; Yeo et al. 1984)
  Late after conditioning: CR retention NO (Frankland & Bontempi 2005; Oakley & Steele Russell 1972; Powell et al. 2001; Simon et al. 2005; Takehara et al. 2003; Yeo et al. 1984)
Lesions of the amygdala
 Delay paradigm
  Before conditioning: CR acquisition YES but decelerated (Bechara et al. 1995; Blankenship et al. 2005; Lee & Kim 2004)
  Early after conditioning: CR retention YES but impaired (Lee & Kim 2004; McGaugh 2002)
  Late after conditioning: CR retention YES (Lee & Kim 2004)
 Trace paradigm
  Before conditioning: data not found; predicted CR acquisition YES but decelerated
  Early after conditioning: data not found; predicted CR retention YES (Büchel et al. 1999; Chau & Galvez 2012)
  Late after conditioning: data not found; predicted CR retention YES (Büchel et al. 1999; Chau & Galvez 2012)
Lesions of the thalamus
 Delay paradigm
  Before conditioning: CR acquisition YES but decelerated (Buchanan & Thompson 1990; Halverson & Freeman 2006)
  Early after conditioning: data not found; predicted CR retention YES but impaired
  Late after conditioning: data not found; predicted CR retention YES but impaired
 Trace paradigm
  Before conditioning: CR acquisition YES but decelerated (Halverson, Poremba, & Freeman 2008; Powell & Churchwell 2002)
  Early after conditioning: data not found; predicted CR retention YES but impaired
  Late after conditioning: data not found; predicted CR retention YES but impaired

This effect is enhanced after ten training trials (Fig. 9b). In vivo, output pathways like the pontine pathway are supplemented by adaptively timed cerebellar response learning, which would strengthen these tendencies.

Experimental data do show deficits in the initial timing and amplitude of the CR, and in the time to acquire the CR, when the hippocampus is damaged and the ISI is relatively long, for example 1,500 ms in rats. These experimenters (Beylin et al., 2001) counted any response within 500 ms of US onset as a CR. We do not simulate this finding due to the variability of these results. These results can, however, be qualitatively explained if the sensory cortical responses habituate at later times when the CS is sustained for such long durations. An at least partial temporal gap would then be created between internal CS activations and US onset. This kind of result could then be explained using the same mechanisms that are used to explain deficits during trace conditioning after hippocampal damage.

Delay and trace conditioning with and without amygdala

Simulations of amygdala lesions are also consistent with experimental data (graphs labeled A in Fig. 10). Delay conditioning simulations with partial and complete amygdala lesions demonstrate the experimental finding (Lee & Kim, 2004) that the amygdala is required for optimal acquisition and retention of the CR, as reflected in the simulated hippocampal response amplitude for adaptive timing (R), the orbitofrontal cortical response amplitude (O), and especially the pontine response amplitude (P). To simulate partial lesions of the amygdala in delay conditioning, the gain of the excitatory inputs from the sensory cortex to the amygdala (Eq. 14, parameter β A) is lowered from the baseline value of 40 to 30, and then to 20. When the growth rate is thus attenuated, there is normal timing in delay conditioning but with a smaller peak amplitude in the amygdala, and also in the hippocampus, which depends upon amygdala-triggered Now Print signals to train the temporal distribution of spectrally timed hippocampal learning (Fig. 10a). The lower peak amplitude reflects the fact that in vivo there is slower and weaker learning of the adaptively timed response. The experimental finding that 4–5 additional days of training can support CR learning in rats with amygdala lesions (Lee & Kim, 2004) may also reflect support from extra-amygdala circuits. Additional training also improves learning in the model (Fig. 10b). However, when the amygdala is completely ablated before training, there is no hippocampal response. The cortical and pontine peak amplitudes show similar results.

Fig. 10

Simulations of amygdala lesions demonstrate that the amygdala is required for optimal acquisition but not for successful retention. (a) To simulate partial lesions of the amygdala before any training trials occur in delay conditioning (five training trials; unconditioned stimulus (US) onset at 550 ms, US duration = 50 ms, US offset at 600 ms, US level = 1), scalar β A in the amygdala excitation term in Eq. 14 was progressively decreased. The results based on the conditioned stimulus (CS)-only presentation during retention testing are presented on a single graph of the variables for sensory cortex (S), orbitofrontal cortex (O), hippocampus (H), amygdala (A), hippocampal adaptive timing (R), and pontine nuclei (P): normal (solid line), 25 % decrease (dashed line), and 50 % decrease (dotted line). These graphs show a marker for the US presented in training for reference only (vertical dashed lines). The CS is also represented (vertical solid lines). Accurate conditioned response (CR) peak amplitude timing as measured by R remained consistent in all cases, as in vivo, but additional training is required for improved responses (see Fig. 10b). The activity profiles of the pontine nuclei vary with the strength and timing of cortical activity to effect a CR. In vivo they are supplemented by learning in the cerebellum, where an adaptively-timed association is made between signals from the tone CS pathway from auditory nuclei to the pons, and from the pons via mossy fiber projections to the cerebellum, where they are trained by signals from the reflex US pathway from the trigeminal to inferior olive nuclei and then via climbing fibers to the cerebellum (Christian & Thompson, 2003; Fiala, Grossberg, & Bullock, 1996). (b) Simulation after ten delay conditioning training trials after partial lesions of the amygdala. All other input parameters and output variables are the same as in Fig. 10a. The CR peak amplitude improved as measured by R. Again, the activity profiles of the pontine nuclei vary with the strength and timing of cortical activity. (c) Simulation of partial lesions of the amygdala before any training trials occur in trace conditioning (20 training trials, US onset at 750 ms, US duration = 50 ms, US level = 1) shows that both the CR amplitude and timing as measured by R and P are negatively impacted: normal (solid line), 25 % decrease (dashed line), and 50 % decrease (dotted line). The activity profiles of the pontine nuclei (P) reflect the experimental data that the amygdala is important in trace conditioning. (d) Trace conditioning with amygdala (A) ablated 100 % after 20 acquisition trials but just before the retention test. On the retention test with CS only, normal activity profiles for CS and US in sensory cortex (S) and orbitofrontal cortex (O) support a normal adaptively-timed response in the hippocampus (R), indicating a time-limited involvement of the amygdala during acquisition. The activity profile of the pontine nuclei (P) also supports the simulation of the data that amygdala involvement is time-limited

The dynamics of the nSTART cortico-cortico-hippocampal loop explains how aversive conditioning can occur with partial amygdala lesions. Activity in the model orbitofrontal cortex, based in part on hippocampal and amygdala inputs (Eq. 7), continues to support adaptively timed learning via its input to sensory cortex (Eq. 2), and sensory cortical input to the hippocampal activation spectrum (Eq. 19) supports adaptively timed learning (Eq. 17). For this to occur, there has to be enough amygdala input to generate a Now Print signal that shapes the adaptively timed response through learning. In vivo, other circuits are also involved that are outside the scope of the nSTART model (see Fig. 2), such as cerebellum, hypothalamus, and basal ganglia, but their responses are not rate-limiting in simulating the main effects above.

The amygdala is required for delay conditioning acquisition, but not for its expression. The cortico-cortico-cerebellar circuit can execute the timed response after learning. Simulations of complete amygdala lesions (outputs of Eq. 14 for amygdala and Eq. 15 for conditioned reinforcement are both zero) show that there is no CR learned if the lesion is made pre-training, but an acquired CR is retained if the lesion is made post-training (Fig. 10d), in agreement with some experimental data (Lee & Kim, 2004; Sosina, 1992) but not all (McGaugh, 2002; Siegel et al., 2015). Furthermore, while Büchel et al. (1999) reported decelerated trace conditioning when amygdala lesions were made before training, simulation of a 50 % partial lesion of the amygdala before trace conditioning followed by a retention test after 60 training trials (US onset at 750 ms, US level = 1) still shows severe impairments compared with 20 training trials. Perhaps the lesion is so large that recovery may not be possible at all (Siegel et al., 2015).

In particular, the amygdala has been found to be unnecessary for fear conditioning acquisition in Pavlovian experimental paradigms in which the aversive US is so negative that autonomic reflex pathways may control the learning (Lehman et al., 2000; Vazdarjanova & McGaugh, 1998). However, in appetitive learning and instrumental conditioning, the amygdala is always required for acquisition (Cahill & McGaugh, 1990; McGaugh, 2002). This latter property is explained by the model hypothesis that conditioned reinforcer learning and incentive motivational learning both involve the amygdala, and provide positive attentional feedback that supports the rapid category learning required to enable the CS to elicit a CR via the orbitofrontal cortex (Fig. 2). Within the dynamics of the nSTART model, this kind of amygdala-mediated motivated attention supports the acquisition of delay and trace conditioning by strengthening adaptively timed attentional shifts based on learned cues. After conditioning, both delay and trace CRs may be mediated more completely by fast cortico-cortical activation of recognition categories via learned cortical weights that serve to activate the adaptively-timed cerebellar motor response without continued need for involvement of the amygdala or the hippocampus.

The nSTART model predicts that, if both amygdala and hippocampus are ablated before or after delay conditioning, then the amygdala lesion most influences delay conditioning, as above. If both amygdala and hippocampus are ablated before trace conditioning, then the model proposes how the hippocampal damage prevents the CR from being learned, because the required cortico-cortical connections that establish a long-term memory trace could not be formed using spectral timing as a temporal bridge. Finally, if both amygdala and hippocampus are ablated long enough after trace conditioning ends, then the model predicts that strong learned cortico-cortical associations will already have formed, so that the CR can still be performed.

Such cortico-cortical learning, supported by amygdala and hippocampus, is a primary form of memory consolidation in the model, but this form of consolidation does not imply that the “same information” is transferred from associative links that involve amygdala and hippocampus to cortico-cortical associations. In addition, the mechanism for memory consolidation that is simulated by nSTART does not propose that memory engrams are quickly learned by the hippocampus and then slowly transferred to the neocortex, as some have proposed, a proposal that seems beset with fundamental difficulties. Rather, nSTART demonstrates how hippocampal endogenous activation capable of bridging the temporal gap can energize the strengthening and consolidation of cortico-cortical pathways that are the same pathways that were partially learned before consolidation begins.

For simplicity, the nSTART model lumps amygdala and hypothalamus together, and thus does not simulate how spared hypothalamic connections might enable responding after an amygdala lesion. The MOTIVATOR model (Fig. 4c; Dranias, Grossberg, & Bullock, 2008; Grossberg, Bullock, & Dranias, 2008) explicitly simulates hypothalamic, amygdala, and basal ganglia contributions to conditioning and motivated performance that are consistent with the current results, and that can be incorporated without undermining the current results in a future extended model.

Trace conditioning with and without hippocampus

Data from early, intermediate, and late stages of normal trace conditioning acquisition trials (McEchron & Disterhoft, 1997; Kim et al., 1995; Takehara et al., 2003) were simulated. In the nSTART model, learning to adaptively time a response to a stimulus is the result of an adaptively timed spectrum of cells. Figure 11a–e shows the spectral activity and output during the simulation after the initial acquisition trial. This process unfolds as follows (see Fig. 7 for diagrams of network processing steps and Fig. 18 below for a complete circuit diagram).

As described above, the signals f(x ij) are generated by the activities x ij(t) of the jth spectral cell (or cell population) (i,j) in response to the ith input I i (Eqs. 19–21, and Fig. 11a). Each x ij responds at a different rate r j to I i. In particular, we use i = 1 to represent the CS and i = 0 to represent the US. Thus, f(x 1j) signals are generated by the CS. They cause the release of chemical transmitters y 1j(t) that habituate, or are inactivated, at a rate proportional to their driving signals f(x 1j) (Eq. 22, and Fig. 11b). The transmitters interact with, or gate, their respective signals to generate gated sampling signals g 1j that are products of f(x 1j) and y 1j (Fig. 11c). These sampling signals g 1j are the differently timed responses of cell sites that together form the basis for spectrally timed learning.

Learning of the association between CS and US occurs at each spectral cell site only when its g 1j is positive. Thus, each g 1j samples learning of US activity that is correlated with it. Both the timing and rate of learning by the adaptive timing weights z 1j (Eq. 24) covary with the size of the corresponding g 1j. Due to the fact that the various g 1j have their peak activities at different times, each site is maximally sensitive to learning correlations with different delays between CS and US.

The signals g 1j give rise to adaptively timed outputs h 1j = 8 g 1j z 1j, wherein the signals g 1j are multiplied, or gated, by their adaptive weights z 1j (Fig. 11d). When the adaptively weighted signals for all spectral components are added together, they form a total population output R that is adaptively timed to peak at, or near, the expected time of US onset. Thus, spectral timing is a property of an entire population of pathways that respond at different rates, not one of which, by itself, adequately represents accurate ISI timing. The hippocampal response after the initial acquisition trial is shown in Fig. 11e. Figure 11f shows data of McEchron and Disterhoft (1997) that exhibit similar timing from early acquisition trials. Figure 11g shows simulation output from the retention test after 20 acquisition trials; cf. Fig. 8.
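To make the complete cascade concrete, the following self-contained sketch (all functional forms, rates, and parameter values are illustrative assumptions rather than the published nSTART equations) combines the pieces sketched in the preceding sections into a toy trace-conditioning loop: on each acquisition trial a sustained CS input drives the activation spectrum and habituative gates, a brief Now Print pulse marks US onset at the ISI, and the spectral traces z are updated; a CS-only test then shows the population output R rising to a maximum in the general neighborhood of the trained ISI rather than at CS onset.

```python
import numpy as np

dt = 1.0                                   # ms per step
T = 800                                    # trial length after CS onset (ms)
ISI = 300                                  # assumed CS-US interstimulus interval (ms)
rates = 0.02 / np.arange(1, 21)            # spectrum of reaction rates, fast to slow (assumed)

def f(x, n=4, k=0.3):                      # sigmoid signal function (assumed form)
    return x**n / (k**n + x**n)

def run_trial(z, learn=True):
    """Run one toy trial; update the LTM traces z in place if learn=True; return R(t)."""
    x = np.zeros_like(rates)               # spectral activations (reset each trial)
    y = np.ones_like(rates)                # habituative transmitter gates (reset each trial)
    R = np.zeros(T)
    for t in range(T):
        x += dt * rates * (-x + 1.0)                    # CS-driven activation spectrum
        s = f(x)
        y += dt * (0.001 * (1.0 - y) - 0.01 * s * y)    # habituative gates deplete and recover
        g = s * y                                        # gated sampling signals
        if learn:
            N = 1.0 if ISI <= t < ISI + 50 else 0.0      # toy Now Print pulse around US onset
            z += dt * 0.002 * g * (N - z)                # spectral learning law (assumed form)
        R[t] = np.dot(g, z)                              # doubly gated population output
    return R

z = np.zeros_like(rates)                   # LTM traces persist (and accumulate) across trials
for _ in range(20):                        # 20 CS-US acquisition trials
    run_trial(z, learn=True)

R_test = run_trial(z, learn=False)         # CS-only retention test
print("R on the test trial is largest at t =", int(R_test.argmax()),
      "ms; the trained ISI was", ISI, "ms")
```

The toy parameters only span a few hundred milliseconds, whereas the published model, with its own forms and parameters, learns the longer ISIs reported in the simulations above and below.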

The simulation of the property that trace conditioning depends on an intact hippocampus is shown in Fig. 12. The model proposes how a neurotrophic cascade from hippocampus to cortex supports learning of an associative connection between sensory cortex and orbitofrontal cortex in response to CS and US pairing during trace conditioning (Eq. 9). Unless there is enough time to build the cortico-cortical synaptic connections required to consolidate memory, both the timing and amplitude of learning rapidly degrade, as in anterograde amnesia.

Fig. 12

Optimal trace conditioning depends on adequate hippocampal function. (a) To simulate partial lesions of the hippocampus before any training trials occur in trace conditioning, scalar β H in the hippocampal excitation term in Eq. 16 was progressively decreased. This was followed by 20 training trials, with unconditioned stimulus (US) onset at 750 ms, US duration = 50 ms, and US amplitude = 1. The results of retention testing are shown for the activities of sensory cortex (S), orbitofrontal cortex (O), hippocampus (H), amygdala (A), hippocampal adaptive timing (R), and the pontine nuclei (P). These graphs show a marker for the US presented in training for reference only (vertical dashed lines). The conditioned stimulus (CS) is also represented (vertical solid lines). Compared with normal retention testing results after 20 acquisition trials (solid line), a 50 % decrease (dashed line) gave a small reduction in conditioned response (CR) peak amplitude and retained good timing, while an 80 % decrease (dotted line) caused deficits in both amplitude and timing. (b) While extended training (60 trials rather than 20) with 80 % ablation shows minor improvement in the amplitude and timing of R, the amplitude and timing of P remain too small to support a normal CR. An intact hippocampus is thus required for efficient trace conditioning

Figure 12a summarizes simulations of how various levels of hippocampal ablation (normal: solid line; 50 % ablation: dashed line; 80 % ablation: dotted line) cause progressively weaker responses that also become premature after sufficient ablation. These effects are due to the elimination of many, but not all, of the adaptively timed hippocampal cell responses that, taken together, span the ISI, as shown in Figs. 11a–e. The duration of this spectral activity is also a key to understanding the role of the hippocampus in trace conditioning and consciousness. Even in the case of an 80 % lesion, Fig. 12b shows that extended training yields some improvement in the timing and amplitude of response indicators for adaptive timing within the hippocampus (R) and the pontine nuclei (P).

The nSTART prediction of when and how the hippocampus is involved in cortical learning was described above and is illustrated by the simulation results in Fig. 13. Figure 13a simulates the property that the establishment of a long-term memory as a result of trace conditioning requires a critical consolidation period with a normally functioning hippocampus. Figure 13a (first row) compares effects of early hippocampal ablation with delayed hippocampal ablation on orbitofrontal peak amplitude, which provides one measure of the strength of the CR. In the partially trained case with five acquisition trials (first row, left column), a reduction in cortical activity results if the hippocampal ablation is made early (dotted line), immediately after acquisition and before the consolidation period, which consists of no-stimulus (NS) trials before the CS test, as compared with the activity that is attained after a late ablation (solid line), which is made after the NS trials and just before the CS test. In contrast, in the fully trained case after 20 acquisition trials (first row, right column), no impairment ensues. There is no difference in orbitofrontal activity between early hippocampal ablation (dotted line) and late hippocampal ablation (solid line) because cortico-cortical connections have already become sufficiently large before the ablation occurs. These simulations are in agreement with experimental data (Kim et al., 1995; McEchron & Disterhoft, 1997; Moyer et al., 1990; Takehara et al., 2003).

Fig. 13

The adaptive weights from sensory cortex to orbitofrontal cortex for each of the cases in Fig. 13a (first row) are shown in Fig. 13a (second row). In particular, the lower two graphs show cortico-cortical adaptive weights that covary with the orbitofrontal cortical activity for each scenario. After partial training with five acquisition trials, early hippocampal ablation prevents an increase in adaptive weight because a critical source of incentive motivational support from the hippocampus is removed before the weight can reach an asymptote (Fig. 13a, second row, left column, dotted line). Late hippocampal ablation (Fig. 13a, left column, solid line) enables weight learning to benefit from this support. After 20 trials of training to asymptote, hippocampal support is no longer needed (Fig. 13a, second row, right column).

It should, however, be emphasized that activation of sensory cortex will continue to activate both the orbitofrontal cortex and hippocampus after learning is complete. This kind of memory consolidation does not imply that the “memory trace” moves from hippocampus to orbitofrontal cortex (cf., Nadel & Moscovitch, 1997).

When hippocampal BDNF is eliminated after acquisition trials (Fig. 13b), the simulation results are largely unchanged. However, when both hippocampal and orbitofrontal BDNF are removed after acquisition trials in the partially trained case (Fig. 13c, left column), there are the same deleterious effects on orbitofrontal activity (Fig. 13c, left column, first row) and on cortico-cortical weights (Fig. 13c, left column, second row) for both the early and late ablation treatments, due to the lack of orbitofrontal BDNF support for consolidation. In the fully trained case (Fig. 13c, right column), removal of hippocampal and orbitofrontal BDNF during early and late ablation treatments yield similar orbitofrontal activities (Fig. 13c, right column, first row) and cortico-cortical weights (Fig. 13c, right column, second row) because consolidation has already occurred. Measures of pontine activity in the model also support this analysis since they are driven by cortical input.
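As a conceptual illustration of these consolidation dynamics (the forms and rates below are assumptions, not the nSTART equations used for Fig. 13), the following sketch grows a cortico-cortical weight only while hippocampal support is present; removing that support immediately after a few acquisition trials freezes consolidation, whereas removing it after the consolidation period, or after extended training, leaves the weight essentially at its asymptote.

```python
def consolidate(n_acquisition_trials, ablate_early,
                rate=0.2, consolidation_trials=30):
    """Toy growth of a cortico-cortical adaptive weight w (illustrative only).

    During acquisition, and during a post-training consolidation period of
    no-stimulus trials, hippocampal support (h = 1) lets w grow toward 1.
    An 'early' ablation removes that support immediately after acquisition;
    a 'late' ablation removes it only after the consolidation period.
    """
    w = 0.0
    for _ in range(n_acquisition_trials):        # training trials with hippocampus intact
        w += rate * (1.0 - w)
    h = 0.0 if ablate_early else 1.0             # early ablation removes hippocampal support
    for _ in range(consolidation_trials):        # consolidation period (no external stimuli)
        w += rate * h * (1.0 - w)
    return w

for n in (5, 20):                                # partially vs fully trained, as in Fig. 13a
    early = consolidate(n, ablate_early=True)
    late = consolidate(n, ablate_early=False)
    print(f"{n:2d} acquisition trials: "
          f"w after early ablation = {early:.2f}, after late ablation = {late:.2f}")
```

With only five acquisition trials the early ablation leaves the toy weight far from its asymptote, whereas after 20 trials early and late ablations give essentially the same consolidated value, mirroring the qualitative pattern of Fig. 13a.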

Delay and trace conditioning with and without thalamus or sensory cortex

Thalamic lesions negatively affect many types of learning since the thalamus is the gateway to perception and to higher levels of emotional and cognitive processing. Experimental data show that thalamic lesions made before delay or trace conditioning slow acquisition to some degree (Buchanan & Thompson, 1990; Powell & Churchwell, 2002). However, the deficit is greater in trace conditioning than in delay conditioning, since in delay conditioning alternate paths are available to convey auditory CS representations to the cerebellum.

The model predicts that lesions to the thalamus, with an equivalent effect on sensory cortex, that are made after delay or trace conditioning would also impair retention for two reasons: (1) disruption of stimulus input processing, and (2) damage to the pathways that support cortico-cortical learning of the association between CS and US, which also serve to control CR performance in the post-consolidation stage of learning. Figure 14 shows that general CR acquisition is impaired in proportion to the extent of the lesion, as reflected in the simulated hippocampal response amplitude (R), orbitofrontal cortex (O), and pontine nuclei (P). The simulations show that, as in vivo for thalamic lesions, the disruption to trace conditioning (Fig. 14b) is more severe than disruption to delay conditioning (Fig. 14a). Extended training (doubling the number of training trials) improves performance for delay conditioning (Fig. 14c) but causes little improvement for trace conditioning in the lesion cases, although it does cause improvement in the no lesion case (Fig. 14d).

Fig. 14

Simulations of lesions of the thalamus, with equivalent effects on sensory cortex, demonstrate that the sensory cortex is required for optimal acquisition and retention in both delay and trace conditioning. To simulate partial lesions of the sensory cortex before any training trials occur, scalar β S in the sensory cortex (Eq. 2) was progressively decreased: normal = solid line, 25 % decrease = dashed line, and 50 % decrease = dotted line. The results of retention testing by conditioned stimulus (CS) presentation are shown for sensory cortex (S), orbitofrontal cortex (O), hippocampus (H), amygdala (A), hippocampal adaptive timing (R), and the pontine nuclei (P). Vertical dashed lines mark the time of unconditioned stimulus (US) presentation during training, but not recall, trials. Vertical solid lines mark the onset and offset of the CS during training trials. Lesions to the sensory cortex weaken learning as a function of the conditioning paradigm and the extent of the lesion, with a special focus on O and P. (a) Recall after five training trials of delay conditioning in all three cases. (b) Worse trace conditioning was seen in the lesioned cases, even after 20 training trials, than in the corresponding delay conditioning cases in (a). (c) Doubling the number of training trials during delay conditioning to ten training trials improved performance in all three cases. (d) Doubling the number of training trials during trace conditioning to 40 trials improved performance in the no-lesion case, but had a negligible effect in the two lesioned cases

Conditioning, consciousness, and amnesia

The link between consciousness and conditioning (Clark, Manns, & Squire, 2002) is clarified by contrasting what happens during delay versus trace conditioning in normal and amnesic subjects. The nSTART model requires a sustained interaction of sensory cortex, orbitofrontal cortex, and hippocampus to achieve trace conditioning. On the basis of clinical data from brain-damaged patients, Damasio (1999, pp. 157–158, 195ff, 265) heuristically derived a CogEM-type model and noted that conscious awareness of “the feeling of what happens” relies on a sustained feedback interaction. The nSTART model (Fig. 2) builds on the START model (Grossberg & Merrill, 1992, 1996) to explain such data with its prediction that this conscious awareness is supported by a sustained, adaptively timed, cognitive-emotional resonance, which is mechanized as a temporal-amygdala-orbitofrontal resonance that is supported by hippocampal feedback. This specific resonance specializes the ART prediction that “all conscious states are resonant states” (Grossberg, 1999). This explanation clarifies why trace conditioning is facilitated by conscious awareness but delay conditioning is not, why a normal subject may not be consciously aware of delay conditioning, and why amnesics with bilateral hippocampal lesions perform like unaware controls on delay and trace conditioning.

In particular, the emotional path via amygdala operates more quickly than the cognitive path of self-awareness via hippocampus. Furthermore, during delay conditioning, adaptively-timed responding can be controlled through the cerebellum, so the hippocampus is not a critical component of successful delay conditioning and, thus, neither is awareness.

Recent experiments have supported the CogEM prediction (Grossberg, 1975, 1984) that emotional responses are part of an attentive cognitive-emotional resonance, and that amygdala activity may be influenced by factors such as stimulus valence, attentional load, competing cognitive task demands, and ambiguity (Pessoa, Padmala, & Morland, 2005; Pessoa, Japee, & Ungerleider, 2000). These experimental results are, moreover, consistent with the hypothesis that a sustained cortico-cortico-hippocampal resonance supports consciousness, since parallel hippocampal and amygdala activations occur during normal conditioning. Indeed, adaptively timed, hippocampally supported cognitive-emotional resonances are predicted to help prevent premature reset of the attentional focus on a valued goal object by task-irrelevant cues during expected disconfirmations (Grossberg & Merrill, 1992, 1996). A hippocampal role is also consistent with the facts that lesions to the amygdala slow acquisition of delay conditioning, but do not impact already acquired responses (Lee & Kim, 2004) and that, although the amygdala plays a key role in associative learning, researchers also note that: “circuitry within the amygdala (AM) or a closely related structure is necessary for some aspects of the formation, maintenance, or expression of these CRs” (Choi & Brown, 2003, p. 8713).

Anterograde and retrograde amnesia

The model clarifies data related to the production of retrograde amnesia due to ablation of the medial prefrontal cortex before, during, or after completion of the consolidation process. Whereas the hippocampus is necessary for the acquisition and consolidation of trace conditioning – the lack thereof causes anterograde amnesia and recent retrograde amnesia (Clark, Broadbent, Zola, & Squire, 2002; Clark & Squire, 1998; Gabrieli et al., 1995; McGlinchey-Berroth et al., 1997; but see also Bayley, Frascino, & Squire, 2005) – the medial prefrontal cortex is necessary for the retention of a high percentage of CRs after trace conditioning occurs in normal subjects. In agreement with data (Kronforst-Collins & Disterhoft, 1998), the simulated CR that results when the orbitofrontal cortex is ablated before or after 20 trace conditioning trials shows impaired timing and amplitude in the pontine nuclei responses (Fig. 15b and d, respectively). Takehara et al. (2003) analyzed this phenomenon as a failure to retain or retrieve memory of the associated adaptive response, and not a simple failure of adaptive timing, because the ablation in their experiments did not affect CR timing. In the nSTART model, the notion that the orbitofrontal cortex provides a critical pathway that helps to read-out the conditioned response via connections to the pontine nuclei is consistent with this retrieval interpretation. In addition, since direct damage to motor cortex does not impair trace eyeblink conditioning (Ivkovich & Thompson, 1997), an alternative interpretation that a motor circuit has failed is not supported.

Fig. 15

Pre-training orbitofrontal cortical lesions do not impair delay conditioning as much as trace conditioning. Scalar β O in the orbitofrontal cortex (Eq. 7) was progressively decreased to simulate a lesion. In (a) and (b), the unlesioned normal case = solid line, 5 % lesion = dashed line, and 10 % lesion = dotted line. The conditioned stimulus (CS) and unconditioned stimulus (US) inputs were chosen as in Fig. 14. The results of retention testing due to CS presentation are shown by graphing the activities of sensory cortex (S), orbitofrontal cortex (O), hippocampus (H), amygdala (A), hippocampal adaptive timing (R) and pontine nuclei (P): (a) Delay conditioning with five acquisition trials. (b) Trace conditioning with 20 acquisition trials. (c) Complete lesions after delay conditioning with five acquisition trials do not impact the ability to perform the conditioned response (CR) as reflected in R and P amplitudes, although timing of P is impaired. (d) Complete orbitofrontal lesions after trace conditioning with 20 acquisition trials greatly reduce the ability to perform the CR as reflected in collapsed R and P amplitudes, and a failure of P timing. Thus orbitofrontal cortex is required for performance after trace conditioning in the data and the model

In the nSTART model, orbitofrontal cortical ablation also interferes with the ability of the CS to sustain the learned cortico-cortical resonance that results in an adaptively timed response profile of the CR in the hippocampus. Indeed, anterograde amnesia may also result if new memories cannot be consolidated due to cortical insult that prevents, or greatly weakens, such a resonance (see Fig. 13c). Figure 15a and c show that, when the model orbitofrontal cortex is ablated before or after five delay conditioning trials, the CR is not negatively affected, which fits data showing that delay conditioning does not require conscious awareness of the stimulus contingencies (Clark & Squire, 1998; Manns, Clark & Squire, 2001) and that amnesics can learn delay conditioning, but not trace conditioning (Clark, et al., 2001).

The intact hippocampus may also support sustained conscious resonance during normal delay conditioning, but it is not required for the ISI durations in the cited studies: “…those conditioning tasks that require the integrity of the hippocampus are the same tasks that aware participants can acquire and unaware participants cannot…” (Clark & Squire, 2004, p. 1467). In particular, for these ISIs, there may not have been enough time to generate a fully developed conscious cognitive-emotional resonance.

These simulation results display the temporal properties of hippocampal and cortical involvement in normal learning involving declarative memory. Amnesia data properties, such as the loss of recent memory, the inability to form new memory, or the loss of remote memory, are consistent with these dynamics in terms of the age of the memory when processing becomes abnormal: with hippocampal injury, new memories rapidly perish while old memories persist; with cortical injury (Fig. 13), new memories might be formed with support from other structures, depending on what cortical structures were damaged, while old memories that critically depend on the cortex perish. Cortical injury may involve the lack of activity in ablated areas, or hyperactivity in the remaining functioning cells (Li, Bandrowski, & Prince, 2005). In any case, the magnitude of the learning deficit depends on locations and scope of damage. Specific effects of interruption on learning and memory – that is, the type of amnesia – are dependent on the task, the stage of learning, and the specific brain area that is deficient, among other variables. The current model illustrates how lesions of several different brain areas, at different times before, during, or after the course of learning, can differentially contribute to this complex pattern of behavioral deficits.

In summary, the nSTART model simulates and qualitatively explains key data patterns concerning how thalamic, prefrontal cortical, amygdala, and hippocampal lesions may influence learning and memory. These data patterns are summarized in Table 1, including, for example, the hallmark hippocampal activity profiles over time during delay conditioning (Berger et al., 1980) and trace conditioning (McEchron & Disterhoft, 1997), the role of hippocampal and cortical lesions in influencing acquisition and retention of recently learned versus remotely learned eyeblink responses (Kim et al., 1995; Takehara et al., 2003), and the ability of amnesic individuals to do delay conditioning, but not trace conditioning, along with corresponding differences in conscious awareness (Clark et al., 2001).

Additional data support the conclusion that the hippocampus is typically essential during acquisition of trace conditioning, while the neocortex is needed for normal retention. In particular, research on discriminative avoidance conditioning found that hippocampal control of thalamo-cortical excitatory volleys determined the timing of CR output during acquisition, whereas signals from anterior ventral thalamic nuclei and feedback from cingulate cortical area 29 determined the timing of CR output during maintenance of learning (Gabriel, Sparenborg, & Stolar, 1987). These data support the facts that, while recent nictitating membrane response (NMR) learning involving the trace conditioning paradigm is severely impaired by hippocampal lesions, its acquisition is resistant to cortical lesions. Conversely, NMR trace conditioning retention is not impaired by hippocampal lesions, but it is impaired by cortical lesions (Frankland & Bontempi, 2005; Oakley & Steele Russell, 1972; Simon, Knuckley, Churchwell, & Powell, 2005; Takehara et al., 2003; Yeo, Hardiman, Moore, & Steele Russell, 1984). In cases where the ISI is relatively short, the hippocampus is not required to support acquisition of the CR (Beylin et al., 2001), corresponding in nSTART to short-term memory circuits whose persistent activities in both sensory cortical and amygdala representations are capable of bridging short temporal gaps.

The nSTART model proposes how the hippocampus consolidates learning of thalamo-cortical and cortico-cortical associations by using the same adaptively-timed pathways by which the hippocampus learns to adaptively time the appropriate duration of motivated attention in a task-selective manner (Grossberg & Merrill, 1992, 1996). By means of a consolidation process that is driven by BDNF-mediated endogenous hippocampal bursting, which in vivo is also driven by continual periodic septal input (Smythe et al., 1992), and BDNF modulation of local, activity-dependent circuits (Schuman, 1999; Thoenen, 1995; Tyler et al., 2002), these associations are stored and recalled in cortico-hippocampal, hippocampo-cortical and cortico-cortical pathways (Sakurai, 1990), as demonstrated through nSTART computer simulations of the corresponding model pathways and mechanisms.

The fact that amygdala is not required after consolidation of Pavlovian conditioning does not contradict the claim of the CogEM model that amygdala is required for reinforcement learning for CR acquisition and performance. The polyvalent constraint on CogEM during learning is not required for performance in the consolidated case of aversive conditioning because the cortico-cortical connection along with extra-amygdala circuits, such as those involving volitional signals from the basal ganglia, would be sufficient to support performance. Indeed, Chang, Grossberg, and Cao (2014) have shown how such a convergence between cortico-cortical and basal ganglia volitional signals can initiate a directed search for a desired goal object in a cluttered scene, thereby illustrating how the Where’s Waldo problem may be solved.

Discussion

Five different types of learning interact during conditioning and memory consolidation

The nSTART model proposes that at least five different types of learning typically occur in parallel to ensure that associations can be formed and consolidated across temporal gaps, as occurs during trace conditioning (Fig. 2). As described above, the nSTART model includes: CS category learning via thalamo-cortical and cortico-cortical circuits, conditioned reinforcement learning via thalamo-amygdala and sensory cortical-amygdala circuits, incentive motivational learning via amygdala-orbitofrontal cortical circuits, and adaptively-timed learning of motivated attention via sensory cortical-hippocampal-orbitofrontal cortical circuits. There is also adaptively-timed learning of motor responses via the cerebellum (Figure 16), but this is not simulated in the current study. The key brain structures and processes explicitly represented in the nSTART model are summarized in Table 2.

Fig. 16

Multiple hippocampal functions: Space, time, novelty, consolidation, and episodic learning

The nSTART model does not presume to summarize all the functional roles that are played by the hippocampus in vivo. The hippocampus is known to participate in multiple functions, including spatial navigation, adaptively-timed conditioning, novelty detection, and the consolidation of declarative (notably, episodic) learning and memory. The hippocampus hereby raises a general issue that is confronted whenever one tries to understand how a given brain region works: Why does each brain region support a particular combination of processes, rather than a different one? How do these processes interact in a way that makes functional sense of their anatomical propinquity? Related neural models have clarified how some of these other processes work, and why they are near one another anatomically. They are briefly reviewed in this section. The articles that develop these models include citations of many relevant experimental data.

In particular, these models indicate that more than one hippocampal process may be at work in parallel during memory consolidation. This expanded view of memory consolidation is clarified by model explanations of why novelty detection has been linked to the process of memory consolidation during the learning of recognition categories, whether or not this learning needs to bridge a long temporal gap. Adaptive Resonance Theory, or ART, proposes how a memory search can occur during the learning of recognition categories, and how a sufficiently big mismatch between learned top-down expectations and bottom-up feature patterns can activate the novelty-sensitive orienting system (Fig. 3), which includes the hippocampus, to drive a memory search for a better matching category. The size of such a mismatch registers how novel the current stimulus is when calibrated against active top-down expectations. ART explains how such memory searches lead to learning of a stable, or consolidated, recognition category that requires no further searches, and thus to the cessation of hippocampal novelty potentials (Figs. 3 and 17). After consolidation of a category is complete, presentation of a familiar object exemplar causes direct access to the globally best-matching category via thalamo-cortical and cortico-cortical pathways.

Fig. 17

In the START model framework, ART category learning circuits and Spectral Timing circuits can both inhibit the orienting system: When a good enough match occurs between a feature pattern at level F 1 and the top-down expectation from the category level F 2, the orienting system A is inhibited, thereby preventing a memory search. If the cognitive-emotional sensory-drive (S − D) resonance that is supported by hippocampal adaptive timing also inhibits A, then the orienting system again cannot fire until the adaptively timed signal is removed. The former mechanism clarifies how hippocampal novelty potentials fade away as thalamo-cortical and cortico-cortical category learning consolidates. The latter mechanism clarifies how orienting responses are inhibited during expected disconfirmations

Carpenter and Grossberg (1993) and Grossberg (2013) have noted how these properties can qualitatively explain quite a few data about medial temporal amnesia when the model hippocampus is ablated, thereby eliminating memory search during the consolidation process. These properties include unlimited anterograde amnesia, limited retrograde amnesia, perseveration, difficulties in orienting to novel cues, a failure of recombinant context-sensitive processing, and differential learning by amnesics and normals on easy versus demanding categorization tasks.

Thus, in addition to the important role of adaptively-timed hippocampal responses in bridging temporal gaps when events to be associated are separated in time, the hippocampus is also part of the novelty-sensitive memory search system for consolidating thalamo-cortical and cortico-cortical category learning. Both of these processes are included in START model circuits (Fig. 6), but without the enhancements that have enabled nSTART to simulate challenging data about early versus late lesions of amygdala, hippocampus, and orbitofrontal cortex during delay and trace conditioning.

The adaptively-timed hippocampal circuits are part of a larger theory about why both spatial and temporal representations exist within the entorhinal-hippocampal system. Neural models have provided a unified explanation of how these spatial representations (Mhatre, Gorchetchnikov, & Grossberg, 2012; Grossberg & Pilly, 2012, 2014; Pilly & Grossberg, 2012, 2014) and temporal representations (Grossberg & Merrill, 1992, 1996; Grossberg & Schmajuk, 1989) may arise in the entorhinal-hippocampal system during development and adult learning, and how they interact with other brain regions to control navigational behaviors and episodic learning and memory. This explanation emphasizes the fundamental role of brain designs for learning, attention, and prediction, and along the way articulates a rigorous mechanistic sense in which the hippocampus is indeed a “cognitive map” (O’Keefe & Nadel, 1978). This learning perspective also leads to the prediction that the network laws that give rise to the apparently very different behavioral properties of space and time are controlled by mechanistically homologous brain mechanisms, thereby clarifying why these spatial and temporal representations both occur in the entorhinal-hippocampal system, and how they can thus more easily interact to control navigation and episodic memory.

The timing model in question is the Spectral Timing model that has been used to explain and simulate data about normal and abnormal delay and trace conditioning (Grossberg & Merrill, 1992, 1996; Grossberg & Schmajuk, 1989). Due to the computational homolog between spatial and temporal representations, the spatial model is called the Spectral Spacing model (Grossberg & Pilly, 2012, 2014). Both models learn to represent spatial and temporal properties of the environments that animals or humans experience (Gorchetchnikov & Grossberg, 2007).

In the case of the Spectral Spacing model, this learning leads to grid cell receptive fields of multiple spatial scales along the dorsoventral axis of the medial entorhinal cortex that cooperate to form hippocampal place cells that can represent large spaces. In the case of the Spectral Timing model, this learning enables “time cells” that respond at multiple temporal scales to cooperate to represent large time intervals. As noted earlier, the Spectral Timing model predicted in the 1980s the properties of time cells that have been reported in the hippocampus during the past few years, notably their Weber law properties. In both the Spectral Spacing and Spectral Timing models, a spectrum of cell rates generates a spatial gradient of cells with different properties. In the case of the Spectral Spacing model, grid cells with increasing spatial scales are learned along the dorsoventral axis of the medial entorhinal cortex. In the case of the Spectral Timing model, time cells with increasing onset times and variances are generated. It has been shown how Spectral Timing can be achieved using properties of the metabotropic glutamate receptor (mGluR) system, which proposes a biochemical basis for the ability of these cells to span such long time intervals (Fiala, Grossberg, & Bullock, 1996). An open question is whether the Spectral Spacing model uses a similar mechanism, suitably specialized.

These homologous spatial and temporal mechanisms have been used to provide a unified theoretical explanation, and quantitative computer simulations, of a body of challenging behavioral and neurobiological data about both space and time that have no other unified explanation at this time, leading to the name neural relativity for this mechanistic homology. In particular, the current study proposes how at least some time cells may participate in memory consolidation that requires the ability of the hippocampus to bridge across temporal gaps between stimuli that are associated through conditioning.

The coexistence of spatial and temporal learning in the hippocampus may support its role in episodic learning and memory, since episodic memories typically combine both spatial and temporal information about particular autobiographical events (Eichenbaum & Lipton, 2008; Tulving, 1972). The nSTART model does not include spatial representations, or the prefrontal working memory and list chunking networks for temporary and long-term storage of sequential information, and thus does not attempt to explain data about episodic learning and memory. Activation of such spatially-dependent episodic memories may always require hippocampal spatial representations, so a restricted gradient of retrograde amnesia may not be expected after hippocampal lesions that eliminated them. As noted within the “multiple traces” proposal of how memory consolidation works (Nadel & Moscovitch, 1997, p. 222): “The most parsimonious account of the data would be to assume that the hippocampal complex and neocortex continue to be involved in both the storage and the retrieval of episodic memory traces throughout life.”

Episodic memories may depend upon knowledge of sequences of correlated object and spatial information, not just information about individual objects or places. This kind of sequential information is also important for carrying out context-sensitive searches for desired objects in scenes. For example, seeing a refrigerator and a stove at particular positions in a familiar kitchen may generate an expectation of seeing a sink at a different position. A large psychophysical database about contextual cueing (e.g., Brockmole et al., 2006; Chun, 2000; Chun & Jiang, 1998; Jiang & Wagner, 2004; Lleras & von Mühlenen, 2004; Olson & Chun, 2002) describes how both object and spatial information contribute to such expectations, while they drive efficient searches to discover and act upon desired goal objects. The ARTSCENE Search model (Huang & Grossberg, 2010) shows how spatial and object working memories, list chunks, and spatial and object priming signals may be computed through interactions among the perirhinal and parahippocampal cortices (Bar, Aminoff, & Schacter, 2008; Brown & Aggleton, 2001; Epstein, Parker, & Feiler, 2007; Murray & Richmond, 2001), prefrontal cortex, temporal cortex, and parietal cortex, and uses these interactions to simulate key psychophysical data from contextual cueing experiments. The nSTART, ARTSCENE Search, and Spectral Spacing models may in the future be fused to provide a foundation on which to build a more complete theory of episodic learning and memory.

Alternative models of memory consolidation

The popular unitary trace transfer hypothesis assumes that there is a memory representation that is first stored in the hippocampus and then transferred to the neocortex to be consolidated (McClelland, McNaughton, & O’Reilly, 1995; Squire & Alvarez, 1995). McClelland et al. (1995) thus propose “a separate learning system in the hippocampus and why knowledge originally stored in this system is incorporated in the neocortex only gradually” (p. 433). This hypothesis is justified by the assumption that the hippocampus can learn quickly, but the neocortex can only learn slowly, so the hippocampus is needed to first capture the memory before that same memory representation is transferred to the more slowly learning neocortex. There are, however, fundamental conceptual and mechanistic problems with a unitary trace transfer hypothesis as presented by McClelland et al. (1995) that persist in more recent expositions (Atallah, Frank, & O’Reilly, 2004; O’Reilly & Rudy, 2000): a representation problem, a learning rate problem, and a real-time learning problem. These problems are illustrated by considering how the unitary trace hypothesis might explain how a normal person can see a movie once and remember it well enough to describe it later to a friend in considerable detail, even though the scenes flash by quickly.

The representation problem concerns the implicit claim that the hippocampus can represent and store all the remembered visual and auditory memories in the movie. There seems to be no experimental evidence, however, that the hippocampus contains such specialized perceptual representations. Moreover, if the hippocampus did contain all the perceptual representations that were needed to represent all visual and auditory memories, then what does the specialized perceptual circuitry of visual and auditory neocortex do? In this regard, the unitary trace modelers never simulate the perceptual contents of the memories that are assumed to be stored in hippocampus and transferred to neocortex.

The learning rate problem concerns the factual basis for the claim that the neocortex must learn slowly. In fact, there are numerous examples of fast perceptual and recognition learning in the neocortex (e.g., Fahle, Edelman, & Poggio, 1995; Kraljic & Samuel, 2006; Sireteanu & Rettenbach, 1995; Stanley & Rubin, 2005; Wagman, Shockley, Riley, & Turvey, 2001). In addition, no evidence is presented by unitary trace transfer theorists that there are slower learning synapses in neocortex than in hippocampus. Even one of the proponents of the slow cortical learning hypothesis has equivocated on this point: “data that appear to support the limited cortical learning view tend to be based on larger lesions of the medial temporal lobe…it is becoming clear that the cortex is capable of quite substantial learning on its own…” (O’Reilly & Rudy, 2000, p. 395).

The real-time learning problem is admitted by the modelers but not solved. A model that has been used in unitary trace model simulations is back propagation. It is well-known that this model is not biologically plausible (e.g., Grossberg, 1988, Section 17). Back propagation must carry out slow learning. Its adaptive weights can change only slightly on each learning trial, thus requiring large numbers of acquisition trials to learn every item in its memory. If the learning rate is sped up, then the model can experience catastrophic forgetting. It is incapable of the kind of fast learning that is experienced while watching a movie or other rare but motivationally engaging series of events. It can only carry out supervised learning, which means that an explicit teacher provides external feedback about the correct response on every learning trial, unlike the unsupervised learning that is characteristic of many biological learning experiences, including watching a movie. Its learned weights are computed using an unrealistic non-local weight transport mechanism that has no analog in the brain. Finally, because of its slow learning requirement, it is important that the data that are being learned have stationary statistical properties, so that each weight gets enough exposure to these properties over many learning trials to enable enough weight growth to occur. In other words, the probabilities of sequential events do not change through time, unlike the world in which we live.

In order to manage these weaknesses of back propagation, McClelland et al. (1995) developed their model based on a process of interleaved learning which is said to occur when memories are slowly transferred from the hippocampus to the neocortex via incremental adjustments in the neocortical representations, while being supervised by hippocampal teaching signals. Various sets of parameter values were used to fit their model to each of four data sets with varying degrees of success. Nevertheless, the authors state that such “…interleaved learning systems… are not at all appropriate for the rapid acquisition of arbitrary associations between inputs and responses” (McClelland et al., 1995, p. 432); in other words, their proposed model cannot do learning in real time.

Similar explanatory limitations are faced by connectionist models such as the one proposed by Moustafa et al. (2013), which does not simulate biophysical properties of neurons, does not describe the anatomical areas involved in delay and trace conditioning, and does not consider the consolidation process. In addition, this model assumes a direct connection from hippocampus to motor output that does not exist.

Beyond the self-criticism offered by McClelland et al. (1995), the unitary trace view of memory consolidation has come under criticism from various researchers on both theoretical and experimental grounds. McGaugh (2000) points to protein synthesis and various neurotransmitters as providers of endogenous modulation of consolidation. In his view, the supposition that the molecular and cellular machinery of memory consolidation works slowly is “clearly wrong” (p. 248). Rather, consolidation seems slow because on-going experience modulates memory strength. In McGaugh's view, the amygdala plays a central role in modulating memories and, thus, in memory consolidation. Lesions of the amygdala disrupt the influence of epinephrine and glucocorticoids from the adrenal gland and, therefore, the consolidation process. In this view, the time-limited role of the hippocampus is to serve as a locus in memory processing in a wider consolidation circuit that includes bidirectional cortico-hippocampal interactions. Nadel and Bohbot (2001) inferred a process of consolidation from retrograde amnesia, but do not see consolidation as a transfer of memory from the hippocampus to other areas. Rather, interactions between systems preserve their respective specializations. All of these heuristic proposals have points of contact within the nSTART model.

Building on the critique of McClelland et al. (1995) given in Grossberg and Merrill (1996), the nSTART model embodies a quite different proposal of hippocampal function than that of the McClelland et al. (1995) model of consolidation. The nSTART model avoids the representation problem because neocortex and hippocampus learn different things. It avoids the learning rate problem because neocortex can learn as fast as sensory inputs and modulatory processes allow. It avoids the real-time learning problem because the fast real-time incremental learning that ART, CogEM, and START allow does not require unrealistic learning mechanisms such as interleaving, and works well in environments whose statistics can change unpredictably through time (Carpenter & Grossberg, 1991, 1993; Grossberg, 2003, 2007, 2013; Grossberg & Levine, 1987; Grossberg & Merrill, 1992, 1996; Grossberg & Schmajuk, 1987, 1989).

Additionally, the nSTART model proposes how three basic learning problems are solved: It enables fast motivated attention to be paid to salient objects and events using pathways to and from the amygdala that support conditioned reinforcer and incentive motivational learning (Figs. 2, 4, 5 and 6). It maintains motivated attention for an appropriate duration on salient objects and events using an adaptively-timed cortical-hippocampal-cortical circuit that also inhibits unwanted orienting reactions (Fig. 6). Finally, it prevents premature responses using adaptively-timed cerebellar motor learning (Figs. 2 and 16). Thus, the hippocampal influence on cortical learning is not just a transfer of the same memory trace, but rather the result of interactions between multiple types of learning. An enhanced understanding in nSTART of the role of neurotrophins in the creation and maintenance of memory and the role of attention in the generation of awareness and self-consciousness builds upon this analysis.

Clinical relevance of BDNF

In line with recent work on the etiology and treatment of neurological diseases such as Alzheimer’s, Parkinson’s, Huntington’s, epilepsy, and Rett syndrome, and neuropsychiatric disorders such as depression, bipolar disorder, anxiety-related disorders, schizophrenia, and addiction (Autry & Monteggia, 2012; Hu & Russek, 2008), the nSTART model is consistent with clinical treatments for impaired cognitive function that implicate an important role for BDNF. In clinical applications, the deleterious effects on synaptic and behavioral plasticity associated with low levels of BDNF may be reversed by exercise (Molteni et al., 2004), a finding with obvious relevance to educational intervention as well. Treatments that include cognitive and physical exercise have been shown to increase BDNF levels and to relieve symptoms (Cotman & Berchtold, 2002). In addition, BDNF levels, which are low in proportion to the severity of mania and depression, increase with clinical improvement using antidepressants and mood stabilizers (Post, 2007). However, too much excitation can cause problems and require therapies to down-regulate BDNF and related processes (Birnbaum et al., 2004; Koyama & Ikegaya, 2005).

Mathematical equations and parameters

nSTART model overview

nSTART is a real-time neural network with multiple feedforward and feedback connections. On-center off-surround membrane, or shunting, equations with terms for spontaneous decay, input-driven excitation and inhibition, and recurrent excitation and inhibition represent a rate-based approximation to Hodgkin-Huxley dynamics. These equations were integrated over time using the fourth-order Runge–Kutta (RK4) method for numerical integration of ODEs, written in MATLAB 12.1 running under the Windows 8 operating system on an Intel Quad Core microprocessor. The equations demonstrated the reported qualitative properties over a wide range of parameter choices. Final parameter selection was based on the goal of running all of the simulations using a single set of parameters. Figure 18 shows the mechanistic circuit diagram of the interacting nSTART pathways and processes that were illustrated in Figs. 2 and 7 and qualitatively described above. The equations are formally described below. Table 2 presents all system variables and their initial values, as well as the parameters with their values.
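
As an illustration of this integration scheme (not the authors' MATLAB code), the following Python sketch applies a fixed-step RK4 update to a generic single shunting unit; the input time course, step size, and gain values are arbitrary placeholders.

def rk4_step(f, x, t, dt):
    """One classical fourth-order Runge-Kutta step for dx/dt = f(x, t)."""
    k1 = f(x, t)
    k2 = f(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = f(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = f(x + dt * k3, t + dt)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def shunting_rhs(x, t, I):
    """Generic shunting unit dx/dt = -x + (1 - x) * I(t): passive decay plus
    input-driven excitation that saturates at 1 (illustrative only)."""
    return -x + (1.0 - x) * I(t)

# Example: a single unit driven by a brief step input.
I = lambda t: 1.0 if 0.1 <= t <= 0.6 else 0.0
x, dt = 0.0, 0.001
for step in range(1000):
    x = rk4_step(lambda u, t: shunting_rhs(u, t, I), x, step * dt, dt)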

The model was tested by simulating data from reinforcement learning experiments, notably classical conditioning experiments. To simplify the model, we use two types of input: I_i, i ≥ 1, which turns on when the ith CS, CS_i, occurs, and I_0, which turns on when a US occurs. I_i activates the ith sensory representation S_i. Another population of cells A represents a drive representation in the amygdala. It receives a combination of sensory, reinforcement, and homeostatic (or drive) stimuli. Reinforcement learning, emotional reactions, and motivated attention decisions are controlled by A. During conditioning, presentation of a CS (I_1) before a US (I_0) causes activation of sensory cortical activity S_i followed by activation of A. Such pairing strengthens the adaptive weight, or long-term memory trace, in the modifiable synapses from S_i to A, and converts CS_i into a conditioned reinforcer. Conditioned reinforcers hereby acquire the power to activate A via the conditioning process. These and other learning and performance processes of the nSTART model are defined by the following equations and parameters.
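
The trial structure behind these inputs can be sketched as follows; the timings used here are illustrative placeholders rather than the experimental parameters simulated in the paper.

def cs_us_inputs(t, cs_onset=0.0, cs_duration=0.25, isi=0.5, us_duration=0.05, trace=True):
    """Illustrative CS (I_1) and US (I_0) step inputs for one conditioning trial.

    In delay conditioning the CS stays on until the US arrives; in trace
    conditioning the CS turns off first, leaving a stimulus-free trace
    interval that the hippocampus must bridge.
    """
    us_onset = cs_onset + isi
    cs_offset = cs_onset + cs_duration if trace else us_onset
    I1 = 1.0 if cs_onset <= t < cs_offset else 0.0               # CS input I_1
    I0 = 1.0 if us_onset <= t < us_onset + us_duration else 0.0  # US input I_0
    return I1, I0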

Sensory cortex and thalamus

Sensory cortical dynamics

Cell activity, or voltage V(t), in vivo can be represented by the membrane, or shunting, equation:

C \frac{dV}{dt} = \left(V^{+} - V\right) g^{+} + \left(V^{-} - V\right) g^{-} + \left(V^{p} - V\right) g^{p},    (1)

where C is capacitance; the constants V^+, V^-, and V^p are excitatory, inhibitory, and passive saturation points of V, respectively; and g^+, g^-, and g^p are conductances that can be changed by inputs (Grossberg, 1968b; Hodgkin, 1964). In the model equations, V is replaced with a symbol that represents the activity of a particular cell (population) in the network. A basic processing unit in the model is a network of shunting neurons that interact within a feedforward and/or feedback on-center off-surround network whose shunting dynamics contrast-normalize its cell activities (Grossberg, 1973, 1980). These networks also have a total activity with an upper bound that tends to be independent of the number of active cells.

The activity S i of the ith sensory cortical cell (population) obeys:

\frac{d}{dt} S_i = -15 S_i + \beta_S (1 - S_i)\left[I_i + f_S(S_i)(1 + O_i)\right] S_{mi} - 15 S_i \sum_{k \neq i} f_S(S_k)(1 + O_k).    (2)

The inputs I_i are turned on and off by presentation and termination of a CS input (I_1) or US input (I_0) over time. Term −15S_i describes passive decay of activity S_i. Term β_S(1 − S_i)(I_i + f_S(S_i)(1 + O_i))S_mi describes excitatory interactions in response to input I_i, notably the recurrent on-center excitatory feedback signal f_S(S_i) from population S_i to itself (Eq. 4), the top-down modulatory attentional input O_i from orbitofrontal cortex, and the habituative transmitter S_mi that depresses these excitatory interactions in an activity-dependent way (Eq. 6). Excitation is scaled by parameter β_S. Due to the shunting term β_S(1 − S_i) in β_S(1 − S_i)(I_i + f_S(S_i)(1 + O_i))S_mi, activity S_i can continue to grow until it reaches the excitatory saturation point, which is set to 1 in Eq. 2. Term −15S_i Σ_{k≠i} f_S(S_k)(1 + O_k) describes lateral inhibition of S_i by competitive feedback signals f_S(S_k) from the off-surround of other sensory cortical activities S_k, k ≠ i, modulated by the corresponding top-down orbitofrontal signal O_k. Due to the excitatory feedback signals, a brief CS input (I_1) gives rise to a sustained STM activity S_i which can remain sensitive to the balance of signals across the network due to its shunting off-surround, notably by competition from activation in response to the US input (I_0).
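
A vectorized sketch of the right-hand side of Eq. 2 is given below; beta_S stands for the gain β_S whose published value is listed in Table 2 (not reproduced here), and the code is illustrative rather than the authors' implementation.

import numpy as np

def f_S(x, theta=0.02):
    """Half-wave-rectified signal function of Eq. 4."""
    return np.maximum(x - theta, 0.0)

def dS_dt(S, I, O, Sm, beta_S):
    """Right-hand side of Eq. 2 for the vector of sensory cortical activities S_i."""
    on_center = beta_S * (1.0 - S) * (I + f_S(S) * (1.0 + O)) * Sm
    surround = f_S(S) * (1.0 + O)                            # each cell's off-surround signal
    off_surround = 15.0 * S * (surround.sum() - surround)    # sum over k != i
    return -15.0 * S + on_center - off_surround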

The dynamics of (sensory cortical)-to-(orbitofrontal cortical) circuits are modeled (Fig. 2). For simplicity, activity levels of thalamus (T i) and sensory cortex (S i) are lumped into a single representation:

T_i \equiv S_i.    (3)

With this convention in mind, simulation results may interchangeably mention thalamo-cortical or cortico-cortical connectivity, as required by a given context.

Signal functions in recurrent on-center off-surround shunting network

The signal function f_S(S_k) in Eq. 2 is a particularly simple faster-than-linear signal function, one that is half-wave-rectified and then linear above an output threshold (Grossberg, 1973):

f_S(S_k) = \left[S_k - 0.02\right]^{+} = \max\left(S_k - 0.02,\, 0\right),    (4)

where 0.02 is the threshold value that must be exceeded for the signal to become positive. Faster-than-linear signal functions tend to suppress noise while contrast-enhancing the most active cell activity and making winner-take-all choices in networks such as (Eq. 2), as proved in Grossberg (1973).

Habituative transmitter gates

Habituative transmitters such as S mi in (Eq. 2) tend to obey equations of the following general form (Grossberg 1968b, 1972, 1980):

\frac{d}{dt} N_{mi} = 0.5\left(1 - N_{mi}\right) - 2.5 f_N(N_i) N_{mi}.    (5)

The amount of neurotransmitter N_mi in (Eq. 5) accumulates, scaled by a factor of 0.5, up to a limit of 1 due to the accumulation term 1 − N_mi, and is inactivated, or habituates, by the gated release term −2.5f_N(N_i)N_mi, whereby N_mi is inactivated by mass action at a rate proportional to the product of an excitatory signal f_N(N_i) from either sensory cortex (Eq. 2) or orbitofrontal cortex (Eq. 7), and the amount N_mi of available transmitter. These modulators are similar to those in the habituative transmitter spectrum for hippocampal cells (Eq. 22).

In particular, S mi in (Eq. 2) obeys:

\frac{d}{dt} S_{mi} = 0.5\left(1 - S_{mi}\right) - 2.5\left[I_i + f_S(S_i)(1 + O_i)\right] S_{mi}.    (6)

S_mi accumulates up to a limit of 1 due to the accumulation term 0.5(1 − S_mi), and is inactivated by mass action at a rate proportional to the product of (I_i + f_S(S_i)(1 + O_i)), the excitatory term in Eq. 2 that the transmitter gates, and the amount of available transmitter S_mi. A similar transmitter equation acts within orbitofrontal cortex (Eq. 13).
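
In code, the gate of Eq. 6 can be sketched as follows (illustrative only):

import numpy as np

def f_S(x, theta=0.02):
    """Signal function of Eq. 4."""
    return np.maximum(x - theta, 0.0)

def dSm_dt(Sm, I, S, O):
    """Habituative transmitter gate of Eq. 6: accumulation toward 1 at rate 0.5,
    and mass-action inactivation by the same gated excitatory signal that the
    transmitter multiplies in Eq. 2."""
    return 0.5 * (1.0 - Sm) - 2.5 * (I + f_S(S) * (1.0 + O)) * Sm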

Orbitofrontal cortex, category learning, and incentive motivational learning

Orbitofrontal cortical dynamics

The activity O i of the ith orbitofrontal cortical cell (population) obeys:

\frac{d}{dt} O_i = -10 O_i + \beta_O (2 - O_i)\left[\left(f_S(S_i) + 0.03\right) 0.0625\, w_{Si}\left(A w_{Ai} + 10 H w_{Hi} + 800 B_{Oi}\right) + 0.75 O_i\right] O_{mi} - 10 O_i \sum_{k \neq i} O_k    (7)

In (7), a phasic input from sensory cortex (f_S(S_i), Eq. 2), plus a tonic activity of 0.03 (see f_S(S_i) + 0.03), is modulated by inputs from the amygdala (A, Eq. 14), hippocampus (H, Eq. 16), and orbitofrontal BDNF (B_Oi, Eq. 12). In addition, a recurrent self-excitatory feedback signal (O_i) supports persistence of orbitofrontal activity after the external sensory input is turned off and f_S(S_i) decays to 0. As in Eq. 2, there is a passive decay term −10O_i; an excitatory shunting on-center term β_O(2 − O_i)[(f_S(S_i) + 0.03)0.0625w_Si(Aw_Ai + 10Hw_Hi + 800B_Oi) + 0.75O_i]O_mi that can increase up to 2, its saturation point; an activity-dependent habituative transmitter gate O_mi of excitatory cortical interactions (Eq. 13); and a shunting off-surround inhibitory term −10O_i Σ_{k≠i} O_k that enables contrast normalization. Adaptive weights, or LTM traces, w_Si, w_Ai, and w_Hi (see Eqs. 8, 9, 10, and 11) gate the inputs f_S(S_i), A, and H, respectively. An excitatory gain of 10 multiplies H and of 800 multiplies B_Oi.
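
The corresponding sketch of Eq. 7 is given below; beta_O stands for the gain β_O from Table 2 (not reproduced here), and the code is an illustrative rendering rather than the published implementation.

import numpy as np

f_S = lambda x: np.maximum(x - 0.02, 0.0)        # signal function of Eq. 4

def dO_dt(O, S, Om, A, H, w_S, w_A, w_H, B_O, beta_O):
    """Right-hand side of Eq. 7 for the vector of orbitofrontal activities O_i."""
    drive = (f_S(S) + 0.03) * 0.0625 * w_S * (A * w_A + 10.0 * H * w_H + 800.0 * B_O)
    on_center = beta_O * (2.0 - O) * (drive + 0.75 * O) * Om
    off_surround = 10.0 * O * (O.sum() - O)      # shunting off-surround, k != i
    return -10.0 * O + on_center - off_surround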

Cortical category learning and incentive motivational learning

The learned adaptive weights to the orbitofrontal cortex all obey an outstar learning law (Grossberg, 1980), as described above. The weights from amygdala and hippocampus (w Ai and w Hi, respectively) supply incentive motivational support for cortico-cortical category learning by w Si. All weights obey the general form:

\frac{d}{dt} w_{Mi} = 4\left(f_M(M_i) + B_{Oi}\right)\left(-w_{Mi} + 2 O_i\right),    (8)

where M = S, A, or H, depending on the context.

Learned adaptive weights from sensory cortex to orbitofrontal cortex obey:

\frac{d}{dt} w_{Si} = 4\left(f_S(S_i) + B_{Oi}\right)\left(-w_{Si} + 2 O_i\right),    (9)

where learning is gated on and off by a sampling signal f_S(S_i) + B_Oi that is the sum of the sensory cortical signal f_S(S_i) (Eq. 4) and the orbitofrontal BDNF B_Oi (Eq. 12). The sampling signal’s size determines the rate at which weight w_Si approaches twice the orbitofrontal activity O_i (Eq. 7) via the term −w_Si + 2O_i.

Learned adaptive weights from amygdala to orbitofrontal cortex obey:

\frac{d}{dt} w_{Ai} = 4\left(0.1 A + B_{Oi}\right)\left(-w_{Ai} + 2 O_i\right)    (10)

and from hippocampus to orbitofrontal cortex obey:

\frac{d}{dt} w_{Hi} = 4\left(0.5 H + B_{Oi}\right)\left(-w_{Hi} + 2 O_i\right).    (11)
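
All three weight equations share the outstar form, which can be sketched compactly (illustrative code, not the published implementation):

def dw_dt(w, sampling_signal, O):
    """Outstar learning law of Eqs. 8-11: the sampling signal gates the rate at
    which weight w tracks twice the postsynaptic orbitofrontal activity O_i."""
    return 4.0 * sampling_signal * (-w + 2.0 * O)

# Sampling signals for the three converging pathways:
#   sensory-to-orbitofrontal (Eq. 9):      f_S(S_i) + B_Oi
#   amygdala-to-orbitofrontal (Eq. 10):    0.1 * A  + B_Oi
#   hippocampus-to-orbitofrontal (Eq. 11): 0.5 * H  + B_Oi
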
Orbitofrontal BDNF

Orbitofrontal BDNF B_Oi time-averages hippocampal signals H that are gated by the learned weights w_Hi, with an excitatory gain of 3.125:

\frac{d}{dt} B_{Oi} = -B_{Oi} + 3.125 H w_{Hi}.    (12)
Habituative transmitter gates in orbitofrontal cortex

Activity-dependent habituative neurotransmitters, or postsynaptic sites, O mi that influence orbitofrontal cortical activity obey a specialized version of (Eq. 5):

\frac{d}{dt} O_{mi} = 0.5\left(1 - O_{mi}\right) - 2.5\left[\left(f_S(S_i) + 0.03\right) 0.0625\, w_{Si}\left(A w_{Ai} + 10 H w_{Hi} + 800 B_{Oi}\right) + 0.75 O_i\right] O_{mi},    (13)

that accumulates to a maximum value of 1 at rate 0.5 via the term 0.5(1 − O_mi), and habituates, or is inactivated, at rate −2.5[(f_S(S_i) + 0.03)0.0625w_Si(Aw_Ai + 10Hw_Hi + 800B_Oi) + 0.75O_i] by the on-center input term in (Eq. 7).

Amygdala and conditioned reinforcer learning

Amygdala drive representation dynamics

The amygdala activity A of the drive representation obeys:

\frac{d}{dt} A = -20 A + \beta_A (10 - A) \sum_i f_S(S_i) F_i.    (14)

Activity A passively decays via term −20A. Term β_A(10 − A) Σ_i f_S(S_i)F_i describes the sum of excitatory signals f_S(S_i) from the ith sensory representation to A, gated by the conditioned reinforcer adaptive weights F_i (Eq. 15). This sum can increase A until it reaches the saturation value 10, as determined by the shunting term (10 − A). Adaptive weight F_i determines how well S_i can activate A, and thus the extent to which the ith CS has become a conditioned reinforcer through learning. Because F_i multiplies f_S(S_i), a large S_i will have a negligible effect on A if F_i is small, and a large effect on A if F_i is large. The US LTM trace F_0 is fixed at a relatively large value to enable the US to activate A via S_0 and to thereby drive conditioned reinforcer learning when a CS is also active. The CS LTM trace F_1 is initially set to one tenth of the US value to prevent the CS from significantly activating A before conditioning takes place.
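
A sketch of Eq. 14 in code (beta_A is the gain β_A from Table 2, not reproduced here; illustrative only):

import numpy as np

f_S = lambda x: np.maximum(x - 0.02, 0.0)   # signal function of Eq. 4

def dA_dt(A, S, F, beta_A):
    """Amygdala drive representation of Eq. 14: conditioned-reinforcer-gated
    sensory signals excite A toward the saturation value 10 while A decays
    at rate 20."""
    return -20.0 * A + beta_A * (10.0 - A) * np.sum(f_S(S) * F)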

Conditioned reinforcer learning

Each adaptive weight F 1 obeys an outstar learning law:

\frac{d}{dt} F_1 = 0.5 f_S(S_i)\left(-F_1 + 0.2 A\right).    (15)

Learning by F_1 is turned on and off by the sampling signal 0.5f_S(S_i), whose size determines the rate at which F_1 time-averages 0.2A. The weight F_1 can increase or decrease during learning, hence both long-term potentiation (LTP) and long-term depression (LTD) can occur. To represent the non-learned response to the US, F_0 is held constant at 0.5.

Hippocampus and adaptively timed learning

Adaptively-timed hippocampal learning

As noted above, the hippocampus delivers adaptively timed signals H to the orbitofrontal cortex that can maintain its activity for a duration that can span the trace interval; see Eq. 7. The hippocampus hereby activates an adaptively-timed incentive motivational pathway in cases when the amygdala cannot. The spectral timing process embodies several processing steps.

Adaptively-timed hippocampal activity

Activity H in the hippocampus obeys:

\frac{d}{dt} H = -15 H + \beta_H (2 - H)\left(0.625 R + 0.5 B_H\right).    (16)

Term − 15H represents passive decay. The excitatory term is scaled by the excitatory gain β H and bounded by 2, due to the shunting term β H(2 − H). The two sources of excitatory input are the adaptively timed input R (Eq. 17) and the total BDNF input B H (Eq. 27), each with its own gain term.

Adaptively-timed population output signal

The adaptively timed signal R is a population response:

R = \sum_{i,j} h_{ij}    (17)

that sums over multiple individually timed signals

h_{ij} = 8 f(x_{ij})\, y_{ij}\, z_{ij}    (18)

that are defined below. None of the signals h ij individually can accurately time the ISI between a CS and US. The entire population response in (Eq. 17) can do so using a “spectrum” of differently timed cells, leading to the term “spectral timing” for this kind of learning (Grossberg and Merrill, 1992, 1996; Grossberg and Schmajuk, 1989).

Activation spectrum

Model simulations use the simplest embodiment of spectrally-timed learning. A more detailed biochemical model is given using Ca++-modulated learning by a spectrum of metabotropic glutamate receptor (mGluR) cell sites in Fiala, Grossberg, and Bullock (1996), which shows how mGluR dynamics can span such long time intervals.

Spectrally timed learning can be initiated when an input signal f S(S i) (Eq. 4) from a sensory cortical representation (Eq. 2) activates a population of hippocampal cell sites with activities x ij that activate the next processing stage via sigmoidal signals:

f(x_{ij}) = \frac{x_{ij}^{8}}{0.01^{8} + x_{ij}^{8}}.    (19)

Activities x ij react at a spectrum of rates:

\frac{d}{dt} x_{ij} = r_j\left[-x_{ij} + (1 - x_{ij}) f_S(S_i)\right],    (20)

with rates r j ranging from 0.171 (fast) to 0.016 (slow) defined by:

r_j = \frac{5.125}{0.0125 + 15(j+1)},    (21)

for j = 1 to 20.
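
The rate spectrum and activation dynamics can be sketched as follows (illustrative code):

import numpy as np

J = np.arange(1, 21)                       # spectral cell sites j = 1, ..., 20
r = 5.125 / (0.0125 + 15.0 * (J + 1))      # Eq. 21: approximately 0.171 down to 0.016

def dx_dt(x, S_signal):
    """Activation spectrum of Eq. 20: each site integrates the same sensory
    signal f_S(S_i) at its own rate r_j, producing staggered time courses."""
    return r * (-x + (1.0 - x) * S_signal)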

Habituative transmitter spectrum

Each spectral activation signal f(x ij) is gated by a habituative chemical transmitter, or postsynaptic response, y ij that obeys:

\frac{d}{dt} y_{ij} = 0.5\left(1 - y_{ij}\right) - 10 f(x_{ij}) y_{ij}.    (22)

As in Eq. 5, y_ij accumulates to 1 via the term (1 − y_ij) at rate 0.5, and habituates, or inactivates, due to a mass action interaction with signal f(x_ij), via the gated release term −10f(x_ij)y_ij. The different rates r_j that activate each x_ij cause the habituative transmitters y_ij to become habituated at different rates as well. The family of curves y_ij, j = 1, 2, …, 20, is called a habituation spectrum.

Gated signal spectrum and time cells

Each signal f(x_ij) interacts with y_ij via mass action to generate a net output signal from its population of cell sites that obeys:

g_{ij} = \left[f(x_{ij}) y_{ij} - 0.03\right]^{+} = \max\left(f(x_{ij}) y_{ij} - 0.03,\, 0\right).    (23)

Each gated signal g ij has a different rate of growth and decay, thereby generating a unimodal function of time that achieves its maximum value M ij at time T ij, where T ij is an increasing function of j, and M ij is a decreasing function of j. Taken together, all the functions g ij define the gated signal spectrum in Fig. 11c. This timed spectrum is the basis of adaptively timed learning over an extended time interval that can range from hundreds of milliseconds to several seconds, with each g ij acting as the sampling signal for its part of the adaptively timed spectrum.
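
The following sketch integrates Eqs. 19-23 for a sustained CS signal to show how the gated spectrum produces a family of time-cell-like responses with increasing peak times. It uses simple Euler steps (the published simulations used RK4), and the CS signal strength, initial conditions, and the reconstructed form of Eq. 19 are illustrative assumptions.

import numpy as np

J = np.arange(1, 21)
r = 5.125 / (0.0125 + 15.0 * (J + 1))            # rate spectrum of Eq. 21
x, y = np.zeros(20), np.ones(20)                 # placeholder initial conditions
f = lambda u: u**8 / (0.01**8 + u**8)            # sigmoid of Eq. 19 (as reconstructed)
S_signal, dt = 0.5, 0.001                        # placeholder CS signal and step size

g_history = []
for _ in range(4000):
    x = x + dt * r * (-x + (1.0 - x) * S_signal)          # Eq. 20
    y = y + dt * (0.5 * (1.0 - y) - 10.0 * f(x) * y)      # Eq. 22
    g_history.append(np.maximum(f(x) * y - 0.03, 0.0))    # Eq. 23: later sites peak later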

Spectral learning law

Each adaptive weight z ij in the spectrum obeys an outstar learning law:

\frac{d}{dt} z_{ij} = 2 g_{ij}\left(-z_{ij} + 2N\right).    (24)

In Eq. 24, g ij is a sampling signal that determines the rate with which z ij samples a transient Now Print signal 2N (Eq. 25) that is derived from amygdala activity A in Eq. 14. Each z ij changes by an amount that reflects the degree to which the curves g ij and N have simultaneously large values through time. If g ij is large when N is large, then z ij increases in size. If g ij is large when N is small, then z ij decreases in size. Since the different g ij peak at different times, each z ij responds to N to different degrees.

The Now Print signal N obeys:

N = \left[A - E - 0.04\right]^{+} = \max\left(A - E - 0.04,\, 0\right),    (25)

where E is a feedforward inhibitory interneuron that obeys:

\frac{d}{dt} E = 40\left(-E + A\right).    (26)

The inhibitory interneuronal activity E in (26) time-averages the amygdala activity A at rate 40. Its activity hereby lags behind that of A. The difference (A − E) in (25) may thus become positive following any sufficiently rapid increase in A. Either a US, or a CS that has become a conditioned reinforcer, can cause such a rapid increase, and thereby activate N, and thus learning of any adaptive weight z_ij whose sampling signal g_ij is sufficiently large at such a time.

An important property of N is that it increases in amplitude, but not significantly in duration, in response to larger inputs A. Thus learning can be faster in response to stronger rewards, but the timing of a conditioned response does not significantly change, as in the data and our simulations thereof (Fig. 8).
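
The Now Print circuit and the spectral learning law can be sketched together as follows (illustrative code):

def now_print(A, E):
    """Now Print signal of Eq. 25: a thresholded burst that tracks rapid
    increases of amygdala activity A above its time average E."""
    return max(A - E - 0.04, 0.0)

def dE_dt(E, A):
    """Feedforward inhibitory interneuron of Eq. 26: time-averages A at rate 40."""
    return 40.0 * (-E + A)

def dz_dt(z, g, N):
    """Spectral learning law of Eq. 24: each sampling signal g_ij gates learning
    of the transient target 2N by its adaptive weight z_ij."""
    return 2.0 * g * (-z + 2.0 * N)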

Doubly-gated signal spectrum

Each long-term memory trace z_ij learns to a different degree. Each z_ij also gates the signals g_ij in order to generate a twice-gated output signal h_ij (Eq. 18) from each of the differently timed cell sites. Comparing the signals h_ij in Fig. 11d with the g_ij in Fig. 11c shows how adaptively timed learning changes the relative strength of each spectral output. When all the h_ij are added together to generate the population output R in (Eq. 17), accurate adaptive timing is achieved.
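
Combining the spectrum into the population output is then a single gated sum (illustrative code; the arguments are numpy arrays over the spectrum):

def population_output(f_x, y, z):
    """Twice-gated signals h_ij = 8 f(x_ij) y_ij z_ij (Eq. 18), summed over the
    whole spectrum to give the adaptively timed population output R (Eq. 17)."""
    return (8.0 * f_x * y * z).sum()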

Hippocampal BDNF

Production of hippocampal BDNF B_H is a time average of 25 times the adaptively timed population signal R (Eq. 17), scaled by a reaction rate of 2:

\frac{d}{dt} B_H = 2\left(-B_H + 25 R\right).    (27)

Hippocampal BDNF in the model extends hippocampal activation, and thus the incentive motivational support that it supplies to cortico-cortical learning during a memory consolidation period after the CS and US inputs terminate.
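
A sketch of Eqs. 16 and 27 in code (beta_H is the gain β_H from Table 2, not reproduced here; illustrative only):

def dH_dt(H, R, B_H, beta_H):
    """Hippocampal activity of Eq. 16: the adaptively timed input R and
    hippocampal BDNF B_H excite H toward the saturation value 2."""
    return -15.0 * H + beta_H * (2.0 - H) * (0.625 * R + 0.5 * B_H)

def dBH_dt(B_H, R):
    """Hippocampal BDNF of Eq. 27: time-averages 25*R at rate 2, prolonging
    hippocampal support of cortico-cortical learning after the inputs end."""
    return 2.0 * (-B_H + 25.0 * R)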

Pontine nuclei

Final common path for conditioned output

Output signals from the amygdala A (Eq. 14) and the CS-activated orbitofrontal cortical representation O 1 (Eq. 7) to the pons combine to form a common final path that is used in the model as a signal that generates a behavioral CR further downstream:

P = A + O_1.    (28)

Acknowledgments

Daniel Franklin and Stephen Grossberg were supported in part by CELEST, an NSF Science of Learning Center (NSF SBE-0354378). Stephen Grossberg was also supported in part by the SyNAPSE program of DARPA (HR0011-09-C-0001).

Compliance with ethical standards

Conflict of interest

The authors declare that they have no competing financial interests.

References

  1. Abbott, L. F., Varela, K., Sen, K., & Nelson, S. B. (1997). Synaptic depression and cortical gain control. Science, 275, 220–223 [DOI] [PubMed]
  2. Akase E, Alkon DL, Disterhoft JF. Hippocampal lesions impair memory of short-delay conditioned eyeblink in rabbits. Behavioral Neuroscience. 1989;103:935–943. doi: 10.1037/0735-7044.103.5.935. [DOI] [PubMed] [Google Scholar]
  3. Aggleton JP. The contribution of the amygdala to normal and abnormal emotional states. Trends in Neurosciences. 1993;16:328–333. doi: 10.1016/0166-2236(93)90110-8. [DOI] [PubMed] [Google Scholar]
  4. Aggleton JP, Saunders RC. The amygdala— what’s happened in the last decade? In: Aggleton JP, editor. The Amygdala. 2. New York: Oxford University Press; 2000. pp. 1–30. [Google Scholar]
  5. Albouy G, King BR, Maquet P, Doyon J. Hippocampus and striatum: dynamics and interaction during acquisition and sleep-related motor sequence memory consolidation. Hippocampus. 2013;23:985–1004. doi: 10.1002/hipo.22183. [DOI] [PubMed] [Google Scholar]
  6. Anagnostaras SG, Maren S, Fanselow MS. Temporally graded retrograde amnesia of contextual fear after hippocampal damage in rats: Within-subjects examination. Journal of Neuroscience. 1999;19:1106–1114. doi: 10.1523/JNEUROSCI.19-03-01106.1999. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Atallah HE, Frank MJ, O’Reilly RC. Hippocampus, cortex, and basal ganglia: Insights from computational models of complementary learning systems. Neurobiology of Learning and Memory. 2004;82:253–267. doi: 10.1016/j.nlm.2004.06.004. [DOI] [PubMed] [Google Scholar]
  8. Autry AE, Monteggia LM. Brain-Derived Neurotrophic Factor and Neuropsychiatric Disorders. Pharmacological Reviews. 2012;64:238–258. doi: 10.1124/pr.111.005108. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Baloch A, Waxman A. Visual learning, adaptive expectations, and behavioral conditioning of the mobile robot MAVIN. Neural Networks. 1991;4:271–302. doi: 10.1016/0893-6080(91)90067-F. [DOI] [Google Scholar]
  10. Bar M, Aminoff E, Schacter DL. Scenes unseen: The parahippocampal cortex intrinsically subserves contextual associations, not scenes or places per se. The Journal of Neuroscience. 2008;28:8539–8544. doi: 10.1523/JNEUROSCI.0987-08.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Barbas H. Anatomic basis of cognitive-emotional interactions in the primate prefrontal cortex. Neuroscience and Biobehavioral Reviews. 1995;19:499–510. doi: 10.1016/0149-7634(94)00053-4. [DOI] [PubMed] [Google Scholar]
  12. Barbas H. Flow of information for emotions through temporal and orbitofrontal pathways. Journal of Anatomy. 2007;211:237–249. doi: 10.1111/j.1469-7580.2007.00777.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Baxter MG, Parker A, Lindner CCC, Izquierdo AD, Murray EA. Control of response selection by reinforcer value requires interaction of amygdala and orbital prefrontal cortex. Journal of Neuroscience. 2000;20:4311–4319. doi: 10.1523/JNEUROSCI.20-11-04311.2000. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Bayley PJ, Frascino JC, Squire LR. Robust habit learning in the absence of awareness and independent of the medial temporal lobe. Nature Letters. 2005;436:550–553. doi: 10.1038/nature03857. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Blankenship, M.R., Huckfeld, R., Steinmetz, J.J., & Steinmetz, J.E. (2005). The effects of amygdala lesions on hippocampal activity and classical eyeblink conditioning in rats. Brain Research, 1035, 120–130. [DOI] [PubMed]
  16. Bechara A, Tranel D, Damasio H, Adolphs R, Rockland C, Damasio AR. Double dissociation of conditioning and declarative knowledge relative to the amygdala and hippocampus in humans. Science. 1995;269:1115–1118. doi: 10.1126/science.7652558. [DOI] [PubMed] [Google Scholar]
  17. Berger TW. Long-term potentiation of hippocampal synaptic transmission affects rates of behavioral learning. Science. 1984;224:627–630. doi: 10.1126/science.6324350. [DOI] [PubMed] [Google Scholar]
  18. Berger TW, Clark GA, Thompson RF. Learning-dependent neuronal responses recorded from limbic system brain structures during classical conditionings. Physiological Psychology. 1980;8:155–167. doi: 10.3758/BF03332846. [DOI] [Google Scholar]
  19. Berger TW, Laham RI, Thompson RF. Hippocampal unit-behavior correlations during classical conditioning. Brain Research. 1980;193:229–248. doi: 10.1016/0006-8993(80)90960-9. [DOI] [PubMed] [Google Scholar]
  20. Berger TW, Thompson RF. Neuronal plasticity in the limbic system during classical conditioning of the rabbit nictitating membrane response. I. The hippocampus. Brain Research. 1978;145:323–346. doi: 10.1016/0006-8993(78)90866-1. [DOI] [PubMed] [Google Scholar]
  21. Berger TW, Weikart CL, Basset JL, Orr WB. Lesions of the retrosplenial cortex produce deficits in reversal learning of the rabbit nictitating membrane response: implications for potential interactions between hippocampal and cerebellar brain systems. Behavioral Neuroscience. 1986;100:802–809. doi: 10.1037/0735-7044.100.6.802. [DOI] [PubMed] [Google Scholar]
  22. Berry SD, Thompson RF. Medial septal lesions retard classical conditioning of the nictitating membrane response of rabbits. Science. 1979;205:2009–2010. doi: 10.1126/science.451592. [DOI] [PubMed] [Google Scholar]
  23. Beylin AV, Gandhi CC, Wood GE, Talk AC, Matzel LD, Shors TJ. The Role of the Hippocampus in Trace Conditioning: Temporal Discontinuity or Task Difficulty? Neurobiology of Learning and Memory. 2001;76:447–461. doi: 10.1006/nlme.2001.4039. [DOI] [PubMed] [Google Scholar]
  24. Birnbaum, S. G., Yuan, P. X., Wang, M., Vijayraghavan, S., Bloom, A. K., Davis, D. J., … Arnsten, A. F. (2004). Protein kinase C overactivity impairs prefrontal cortical regulation of working memory. Science, 306, 882–884. [DOI] [PubMed]
  25. Blair HT, Sotres-Bayon F, Moiya MAP, LeDoux JE. The lateral amygdala processes the value of conditioned and unconditioned aversive stimuli. Neuroscience. 2005;133:561–569. doi: 10.1016/j.neuroscience.2005.02.043. [DOI] [PubMed] [Google Scholar]
  26. Blumer D, Benson DF. Personality changes with frontal lobe lesions. In: Benson DF, Blumer D, editors. Psychiatric Aspects of Neurological Disease. New York: Grune & Stratton; 1975. pp. 151–170. [Google Scholar]
  27. Bonhoffer T. Neurotrophins and activity-dependent development of the neocortex. Current Opinion in Neurobiology. 1996;6:119–126. doi: 10.1016/S0959-4388(96)80017-1. [DOI] [PubMed] [Google Scholar]
  28. Bower GH. Mood and memory. American Psychologist. 1981;36:129–148. doi: 10.1037/0003-066X.36.2.129. [DOI] [PubMed] [Google Scholar]
  29. Brockmole JR, Castelhano MS, Henderson JM. Contextual cueing in naturalistic scenes: Global and local contexts. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2006;32:699–706. doi: 10.1037/0278-7393.32.4.699. [DOI] [PubMed] [Google Scholar]
  30. Brown J, Bullock D, Grossberg S. How the basal ganglia use parallel excitatory and inhibitory learning pathways to selectively respond to unexpected rewarding cues. Journal of Neuroscience. 1999;19:10502–10511. doi: 10.1523/JNEUROSCI.19-23-10502.1999. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Brown J, Bullock D, Grossberg S. How laminar frontal cortex and basal ganglia circuits interact to control planned and reactive saccades. Neural Networks. 2004;17:471–510. doi: 10.1016/j.neunet.2003.08.006. [DOI] [PubMed] [Google Scholar]
  32. Brown MW, Aggleton JP. Recognition memory: What are the roles of the perirhinal cortex and hippocampus? Nature Reviews Neuroscience. 2001;2:51–61. doi: 10.1038/35049064. [DOI] [PubMed] [Google Scholar]
  33. Buchanan SL, Thompson RH. Mediodorsal thalamic lesions and Pavlovian conditioning of heart rate and eyeblink responses in the rabbit. Behavioral Neuroscience. 1990;104:912–918. doi: 10.1037/0735-7044.104.6.912. [DOI] [PubMed] [Google Scholar]
  34. Büchel C, Dolan RJ, Armony JL, Friston KJ. Amygdala-hippocampal involvement in human aversive trace conditioning revealed through event-related functional magnetic resonance imaging. Journal of Neuroscience. 1999;19:10869–10876. doi: 10.1523/JNEUROSCI.19-24-10869.1999. [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Bullock D, Cisek P, Grossberg S. Cortical networks for control of voluntary arm movements under variable force conditions. Cerebral Cortex. 1998;8:48–62. doi: 10.1093/cercor/8.1.48. [DOI] [PubMed] [Google Scholar]
  36. Buzsáki G, Chrobak JJ. Synaptic plasticity and self-organization in the hippocampus. Nature Neuroscience. 2005;8:1418–1420. doi: 10.1038/nn1105-1418. [DOI] [PubMed] [Google Scholar]
  37. Buzsáki G, Llinás R, Singer W, Berthoz A, Christen Y, editors. Temporal Coding in the Brain. Berlin: Springer-Verlag; 1994. [Google Scholar]
  38. Cabelli RJ, Hohn A, Shatz CJ. Inhibition of ocular dominance column formation by infusion of NT-4/5 or BDNF. Science. 1995;267:1662–1666. doi: 10.1126/science.7886458. [DOI] [PubMed] [Google Scholar]
  39. Cahill L, McGaugh JL. Amygdaloid complex lesions differentially affect retention of tasks using appetitive and aversive reinforcement. Behavioral Neuroscience. 1990;104:532–543. doi: 10.1037/0735-7044.104.4.532. [DOI] [PubMed] [Google Scholar]
  40. Carpenter GA, Grossberg S. A massively parallel architecture for a self-organizing neural pattern recognition machine. Computer Vision, Graphics, and Image Processing. 1987;37:54–115. doi: 10.1016/S0734-189X(87)80014-2. [DOI] [Google Scholar]
  41. Carpenter GA, Grossberg S. ART 3: Hierarchical search using chemical transmitters in self- organizing pattern recognition architectures. Neural Networks. 1990;3:129–152. doi: 10.1016/0893-6080(90)90085-Y. [DOI] [Google Scholar]
  42. Carpenter GA, Grossberg S. Pattern Recognition by Self-Organizing Neural Networks. Cambridge, MA: MIT Press; 1991. [Google Scholar]
  43. Carpenter GA, Grossberg S. Normal and amnesic learning, recognition and memory by a neural model of cortico-hippocampal interactions. Trends in Neurosciences. 1993;16:131–137. doi: 10.1016/0166-2236(93)90118-6. [DOI] [PubMed] [Google Scholar]
  44. Chang C, Gaudiano P. Application of biological learning theories to mobile robot avoidance and approach behaviors. Journal of Complex Systems. 1998;1:79–114. doi: 10.1142/S0219525998000065. [DOI] [Google Scholar]
  45. Chang, H.-C., Grossberg, S., and Cao, Y. (2014) Where's Waldo? How perceptual, cognitive, and emotional brain processes cooperate during learning to categorize and find desired objects in a cluttered scene. Frontiers in Integrative Neuroscience, doi:10.3389/fnint.2014.0043. [DOI] [PMC free article] [PubMed]
  46. Chau LS, Galvez R. Amygdala’s involvement in facilitating associative learning-induced plasticity: a promiscuous role for the amygdala in memory acquisition. Frontiers in Integrative Neuroscience. 2012;6:92. doi: 10.3389/fnint.2012.00092. [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Chen, C., Kano, M., Abeliovich, A., Chen, L., Bao, S., Kim, J. J., … Tonegawa, S. (1995). Impaired motor coordination correlates with persistent multiple climbing fiber innervation in PKC-gamma mutant mice. Cell, 83, 1233–1242. [DOI] [PubMed]
  48. Chen G, Kolbeck R, Barde Y-A, Bonhoeffer T, Kossel A. Relative contribution of endogenous neurotrophins in hippocampal long-term potentiation. Journal of Neuroscience. 1999;19:7983–7990. doi: 10.1523/JNEUROSCI.19-18-07983.1999. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Christian KM, Thompson RF. Neural substrates of eyeblink conditioning: Acquisition and retention. Learning & Memory. 2003;11:427–455. doi: 10.1101/lm.59603. [DOI] [PubMed] [Google Scholar]
  50. Choi J-S, Brown TH. Central amygdala lesions block ultrasonic vocalization and freezing as conditional but not unconditional responses. Journal of Neuroscience. 2003;23:8713–8721. doi: 10.1523/JNEUROSCI.23-25-08713.2003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  51. Chun MM. Contextual cueing of visual attention. Trends in Cognitive Sciences. 2000;4:170–178. doi: 10.1016/S1364-6613(00)01476-5. [DOI] [PubMed] [Google Scholar]
  52. Chun MM, Jiang Y. Contextual cueing: implicit learning and memory of visual context guides spatial attention. Cognitive Psychology. 1998;36:28–71. doi: 10.1006/cogp.1998.0681. [DOI] [PubMed] [Google Scholar]
  53. Clark RE, Broadbent NJ, Zola SM, Squire LR. Anterograde amnesia and temporally graded amnesia for a nonspatial memory task after lesions of hippocampus and subiculum. Journal of Neuroscience. 2002;22:4663–4669. doi: 10.1523/JNEUROSCI.22-11-04663.2002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Clark RE, Manns JR, Squire LR. Trace and delay eyeblink conditioning: contrasting phenomena of declarative and nondeclarative memory. Psychological Science. 2001;12:304–308. doi: 10.1111/1467-9280.00356. [DOI] [PubMed] [Google Scholar]
  55. Clark RE, Squire LR. Classical conditioning and brain systems: The role of awareness. Science. 1998;280:77–81. doi: 10.1126/science.280.5360.77. [DOI] [PubMed] [Google Scholar]
  56. Clark RE, Squire LR. The importance of awareness for eyeblink conditioning is conditional: Theoretical comment on Bellebaum and Daum (2004) Behavioral Neuroscience. 2004;118:1466–1468. doi: 10.1037/0735-7044.118.6.1466. [DOI] [PubMed] [Google Scholar]
  57. Clark RE, Squire LR. An animal model of recognition memory and medial temporal lobe amnesia: History and current issues. Neuropsychologia. 2010;48:2234–2244. doi: 10.1016/j.neuropsychologia.2010.02.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Clark RE, Zola S. Trace eyeblink classical conditioning in the monkey: a nonsurgical method and behavioral analysis. Behavioral Neuroscience. 1998;112:1062–1068. doi: 10.1037/0735-7044.112.5.1062. [DOI] [PubMed] [Google Scholar]
  59. Contreras-Vidal JL, Grossberg S, Bullock D. A neural model of cerebellar learning for arm movement control: Cortico-spinal-cerebellar dynamics. Learning and Memory. 1997;3:475–502. doi: 10.1101/lm.3.6.475. [DOI] [PubMed] [Google Scholar]
  60. Cotman CW, Berchtold NC. Exercise: a behavioral intervention to enhance brain health and plasticity. Trends in Neurosciences. 2002;25:295–301. doi: 10.1016/S0166-2236(02)02143-4. [DOI] [PubMed] [Google Scholar]
  61. Cousens G, Otto T. Long-term potentiation and its transient suppression in the rhinal cortices induced by theta-related stimulation of hippocampal field CA1. Brain Research. 1998;780:95–101. doi: 10.1016/S0006-8993(97)01151-7. [DOI] [PubMed] [Google Scholar]
  62. Damasio A. The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Harcourt Brace; 1999. [Google Scholar]
  63. Damasio AR, Tranel D, Damasio H. Somatic markers and the guidance of behavior: theory and preliminary testing. In: Levin HS, Eisenberg HM, Benton AL, editors. Frontal Lobe Function and Dysfunction. Oxford: Oxford University Press; 1991. pp. 217–229. [Google Scholar]
  64. Daum I, Schugens MM, Breitenstein C, Topka H, Spieker S. Classical eyeblink conditioning in Parkinson’s Disease. Movement Disorders. 1996;11:639–646. doi: 10.1002/mds.870110608. [DOI] [PubMed] [Google Scholar]
  65. Davis GW, Murphy RK. Long-term regulation of short-term transmitter release properties: retrograde signaling and synaptic development. Trends in Neurosciences. 1994;17:9–13. doi: 10.1016/0166-2236(94)90028-0. [DOI] [PubMed] [Google Scholar]
  66. Davis M. The role of the amygdala in emotional learning. International Review of Neurobiology. 1994;36:225–265. doi: 10.1016/S0074-7742(08)60305-0. [DOI] [PubMed] [Google Scholar]
  67. Deadwyler SA, West MO, Lynch G. Activity of dentate granule cells during learning: Differentiation of perforant path inputs. Brain Research. 1979;169:29–43. doi: 10.1016/0006-8993(79)90371-8. [DOI] [PubMed] [Google Scholar]
  68. Deadwyler SA, West MO, Robinson JH. Entorhinal and septal inputs differentially control sensory-evoked responses in the rat dentate gyrus. Science. 1981;211:1181–1183. doi: 10.1126/science.7466392. [DOI] [PubMed] [Google Scholar]
  69. Desimone, R. (1991). Face-selective cells in the temporal cortex of monkeys. Journal of Cognitive Neuroscience, 3, 1–8. [DOI] [PubMed]
  70. Desimone R. Visual attention mediated by biased competition in extrastriate visual cortex. Philosophical Transactions of the Royal Society of London. 1998;353:1245–1255. doi: 10.1098/rstb.1998.0280. [DOI] [PMC free article] [PubMed] [Google Scholar]
  71. Destexhe A, Contreras D, Steriade M. Mechanisms underlying the synchronizing action of corticothalamic feedback through inhibition of thalamic relay cells. Journal of Neurophysiology. 1998;79:999–1016. doi: 10.1152/jn.1998.79.2.999. [DOI] [PubMed] [Google Scholar]
  72. Dranias MR, Grossberg S, Bullock D. Dopaminergic and non-dopaminergic value systems in conditioning and outcome-specific revaluation. Brain Research. 2008;1238:239–287. doi: 10.1016/j.brainres.2008.07.013. [DOI] [PubMed] [Google Scholar]
  73. Eichenbaum H, Lipton PA. Towards a functional organization of themedial temporal lobe memory system: role of the parahippocampal and medial entorhinal cortical areas. Hippocampus. 2008;18:1314–1324. doi: 10.1002/hipo.20500. [DOI] [PMC free article] [PubMed] [Google Scholar]
  74. Engel AK, Fries P, Singer W. Dynamic predictions: Oscillations and synchrony in top-down processing. Nature Reviews Neuroscience. 2001;2:704–716. doi: 10.1038/35094565. [DOI] [PubMed] [Google Scholar]
  75. Epstein RA, Parker WE, Feiler AM. Where am I now? Distinct roles for parahippocampal and retrosplenial cortices in place recognition. The Journal of Neuroscience. 2007;27:6141–6149. doi: 10.1523/JNEUROSCI.0799-07.2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  76. Evarts EV. Motor cortex reflexes associated with learned movement. Science. 1973;179:501–503. doi: 10.1126/science.179.4072.501. [DOI] [PubMed] [Google Scholar]
  77. Everitt BJ, Cardinal RN, Hall J, Parkinson JA, Robbins TW. Differential involvement of amygdala subsystems in appetitive conditioning and drug addiction. In: Aggleton JP, editor. The Amygdala. 2. New York: Oxford University Press; 2000. pp. 335–390. [Google Scholar]
  78. Fahle M, Edelman S, Poggio T. Fast perceptual learning in hyperacuity. Vision Research. 1995;35:3003–3013. doi: 10.1016/0042-6989(95)00044-Z. [DOI] [PubMed] [Google Scholar]
  79. Felleman DJ, van Essen CD. Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex. 1991;1:1–47. doi: 10.1093/cercor/1.1.1. [DOI] [PubMed] [Google Scholar]
  80. Fiala J, Grossberg S, Bullock D. Metabotropic glutamate receptor activation in cerebellar Purkinje cells as substrate for adaptive timing of the classically conditioned eye-blink response. Journal of Neuroscience. 1996;16:3760–3774. doi: 10.1523/JNEUROSCI.16-11-03760.1996. [DOI] [PMC free article] [PubMed] [Google Scholar]
  81. Flores LC, Disterhoft JF. Caudate nucleus is critically involved in trace eyeblink conditioning. Journal of Neuroscience. 2009;29:14511–14520. doi: 10.1523/JNEUROSCI.3119-09.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  82. Frankland PW, Bontempi B. The organization of recent and remote memories. Nature Reviews Neuroscience. 2005;6:119–130. doi: 10.1038/nrn1607. [DOI] [PubMed] [Google Scholar]
  83. Franklin, D.J., & Grossberg, S. (2005). A neural model of normal and amnesic learning and memory: Conditioning, adaptive timing, neurotrophins, and hippocampus. First Annual Conference on Computational Cognitive Neuroscience (CCN), Washington DC.
  84. Franklin, D.J., & Grossberg, S. (2008). Cognitive-emotional learning by neocortex, amygdala, and hippocampus: Timing, neurotrophins, amnesia, and consciousness. Proceedings of the twelfth international conference on cognitive and neural systems (ICCNS), Boston University.
  85. Freeman JH, Jr, Muckler AS. Developmental changes in eyeblink conditioning and neuronal activity in the pontine nuclei. Learning & Memory. 2003;10:337–345. doi: 10.1101/lm.63703. [DOI] [PMC free article] [PubMed] [Google Scholar]
  86. Friedman HV, Bressler T, Garner CC, Ziv NE. Assembly of new individual excitatory synapses: Time course and temporal order of synaptic molecule recruitment. Neuron. 2000;27:57–69. doi: 10.1016/S0896-6273(00)00009-X. [DOI] [PubMed] [Google Scholar]
  87. Fulton JF. Frontal Lobotomy and Affective Behavior. New York: Norton; 1950. [Google Scholar]
  88. Fuster JM. The Prefrontal Cortex. 2. New York: Raven Press; 1989. [Google Scholar]
  89. Gabriel M, Sparenborg SP, Stolar N. Hippocampal control of cingulate cortical and anterior thalamic information processing during learning in rabbits. Experimental Brain Research. 1987;67:131–152. doi: 10.1007/BF00269462. [DOI] [PubMed] [Google Scholar]
  90. Gabrieli JD, McGlinchey-Berroth R, Carrillo MC, Gluck MA, Cermak LS, Disterhoft JF. Intact delay-eyeblink classical conditioning in amnesia. Behavioral Neuroscience. 1995;109:819–827. doi: 10.1037/0735-7044.109.5.819. [DOI] [PubMed] [Google Scholar]
  91. Ganguly K, Kiss L, Poo M-m. Enhancement of presynaptic neural excitability by correlated presynaptic and postsynaptic spiking. Nature Neuroscience. 2000;3:1018–1026. doi: 10.1038/79838. [DOI] [PubMed] [Google Scholar]
  92. Garrud P, Rawlins JNP, Mackintosh NJ, Goodall G, Cotton MM, Feldon J. Successful overshadowing and blocking in hippocampectomized rats. Behavioural Brain Research. 1984;12:39–53. doi: 10.1016/0166-4328(84)90201-8. [DOI] [PubMed] [Google Scholar]
  93. Gaudiano, P., & Chang, C. (1997). Adaptive obstacle avoidance with a neural network for operant conditioning: Experiments with real robots. Proceedings of the 1997 I.E. International Symposium on Computational Intelligence in Robotics and Automation, 13–18.
  94. Gaudiano P, Zalama E, Chang C, Lopez-Coronado J, et al. A model of operant conditioning for adaptive obstacle avoidance. In: Maes P, et al., editors. From Animals to Animats 4. Proceedings of the fourth International Conference on Simulation of Adaptive Behavior. Cambridge, MA: MIT Press; 1996. pp. 373–381. [Google Scholar]
  95. Gibbon J. Scalar expectancy and Weber's law in animal timing. Psychological Review. 1977;84:279–325. doi: 10.1037/0033-295X.84.3.279. [DOI] [Google Scholar]
  96. Gilbert PFC, Thach WT. Purkinje cell activity during motor learning. Brain Research. 1977;128:309–328. doi: 10.1016/0006-8993(77)90997-0. [DOI] [PubMed] [Google Scholar]
  97. Gloor P, Olivier A, Quesney LF, Andermann F, Horowitz S. The role of the limbic system in experiential phenomena of temporal lobe epilepsy. Annals of Neurology. 1982;12:129–144. doi: 10.1002/ana.410120203. [DOI] [PubMed] [Google Scholar]
  98. Gnadt W, Grossberg S. SOVEREIGN: An autonomous neural system for incrementally learning planned action sequences to navigate towards a rewarded goal. Technical Report CAS/CNS-TR-2007-015. Neural Networks. 2008;21:699–758. doi: 10.1016/j.neunet.2007.09.016. [DOI] [PubMed] [Google Scholar]
  99. Gochin PM, Miller EK, Gross CG, Gerstein GL. Functional interactions among neurons in inferior temporal cortex of the awake macaque. Experimental Brain Research. 1991;84:505–516. doi: 10.1007/BF00230962. [DOI] [PubMed] [Google Scholar]
  100. Gorchetchnikov A, Grossberg S. Space, time, and learning in the hippocampus: How fine spatial and temporal scales are expanded into population codes for behavioral control. Neural Networks. 2007;20:182–193. doi: 10.1016/j.neunet.2006.11.007. [DOI] [PubMed] [Google Scholar]
  101. Gorski JA, Zeiler SR, Tamowski S, Jones KR. Brain-derived neurotrophic factor is required for the maintenance of cortical dendrites. Journal of Neuroscience. 2003;23:6856–6865. doi: 10.1523/JNEUROSCI.23-17-06856.2003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  102. Grant DA. Cognitive factors in eyelid conditioning. Psychophysiology. 1973;10:75–81. doi: 10.1111/j.1469-8986.1973.tb01086.x. [DOI] [PubMed] [Google Scholar]
  103. Green JT, Woodruff-Pak DS. Eyeblink classical conditioning: Hippocampus is for multiple associations as cerebellum is for association-response. Psychological Bulletin. 2000;126:138–158. doi: 10.1037/0033-2909.126.1.138. [DOI] [PubMed] [Google Scholar]
  104. Grossberg S. Some nonlinear networks capable of learning a spatial pattern of arbitrary complexity. Proceedings of the National Academy of Sciences. 1968;59:368–372. doi: 10.1073/pnas.59.2.368. [DOI] [PMC free article] [PubMed] [Google Scholar]
  105. Grossberg S. Some physiological and biochemical consequences of psychological postulates. Proceedings of the National Academy of Sciences. 1968;60:758–765. doi: 10.1073/pnas.60.3.758. [DOI] [PMC free article] [PubMed] [Google Scholar]
  106. Grossberg S. On learning and energy-entropy dependence in recurrent and nonrecurrent signed networks. Journal of Statistical Physics. 1969;1:319–350. doi: 10.1007/BF01007484. [DOI] [Google Scholar]
  107. Grossberg S. On the dynamics of operant conditioning. Journal of Theoretical Biology. 1971;33:225–255. doi: 10.1016/0022-5193(71)90064-6. [DOI] [PubMed] [Google Scholar]
  108. Grossberg, S. (1972a). A neural theory of punishment and avoidance, I: Qualitative theory. Mathematical Biosciences, 15, 39-67.
  109. Grossberg S. A neural theory of punishment and avoidance, II: Quantitative theory. Mathematical Biosciences. 1972;15:253–285. doi: 10.1016/0025-5564(72)90038-7. [DOI] [Google Scholar]
  110. Grossberg, S. (1973). Contour enhancement, short-term memory, and constancies in reverberating neural networks. Studies in Applied Mathematics, 52, 213-257.
  111. Grossberg, S. (1975). A neural model of attention, reinforcement, and discrimination learning. International Review of Neurobiology, 18, 263-327. [DOI] [PubMed]
  112. Grossberg S. Adaptive pattern classification and universal recoding, I: Parallel development and coding of neural feature detectors. Biological Cybernetics. 1976;23:121–134. doi: 10.1007/BF00344744. [DOI] [PubMed] [Google Scholar]
  113. Grossberg S. Adaptive pattern classification and universal recoding, II: Feedback, expectation, olfaction, and illusions. Biological Cybernetics. 1976;23:187–202. doi: 10.1007/BF00344744. [DOI] [PubMed] [Google Scholar]
  114. Grossberg S. How does a brain build a cognitive code? Psychological Review. 1980;87:1–51. doi: 10.1037/0033-295X.87.1.1. [DOI] [PubMed] [Google Scholar]
  115. Grossberg S. Processing of expected and unexpected events during conditioning and attention: A psychophysiological theory. Psychological Review. 1982;89:529–572. doi: 10.1037/0033-295X.89.5.529. [DOI] [PubMed] [Google Scholar]
  116. Grossberg, S. (1984). Some psychophysiological and pharmacological correlates of a developmental, cognitive, and motivational theory. In R. Karrer, J. Cohen, and P. Tueting (Eds.), Brain and information: Event related potentials. New York: New York Academy of Sciences, pp. 58-142. [DOI] [PubMed]
  117. Grossberg S. The Adaptive Brain. New York: Elsevier; 1987. [Google Scholar]
  118. Grossberg, S. (1988). Nonlinear neural networks: Principles, mechanisms, and architectures. Neural Networks, 1, 17-61.
  119. Grossberg S. The link between brain learning, attention and consciousness. Consciousness and Cognition. 1999;8:1–44. doi: 10.1006/ccog.1998.0372. [DOI] [PubMed] [Google Scholar]
  120. Grossberg S. The complementary brain: unifying brain dynamics and modularity. Trends in Cognitive Sciences. 2000;4:233–245. doi: 10.1016/S1364-6613(00)01464-9. [DOI] [PubMed] [Google Scholar]
  121. Grossberg S. The imbalanced brain: From normal behavior to schizophrenia. Biological Psychiatry. 2000;48:81–98. doi: 10.1016/S0006-3223(00)00903-3. [DOI] [PubMed] [Google Scholar]
  122. Grossberg S. How does the cerebral cortex work? Development, learning, attention, and 3D vision by laminar circuits of visual cortex. Behavioral & Cognitive Neuroscience Reviews. 2003;2:47–76. doi: 10.1177/1534582303002001003. [DOI] [PubMed] [Google Scholar]
  123. Grossberg S. Consciousness CLEARS the mind. Neural Networks. 2007;20:1040–1053. doi: 10.1016/j.neunet.2007.09.014. [DOI] [PubMed] [Google Scholar]
  124. Grossberg S. Adaptive Resonance Theory: How a brain learns to consciously attend, recognize, and predict a changing world. Neural Networks. 2013;37:1–47. doi: 10.1016/j.neunet.2012.09.017. [DOI] [PubMed] [Google Scholar]
  125. Grossberg, S. (2016). Towards solving the hard problem of consciousness: The varieties of brain resonances and the conscious experiences that they support. Submitted for publication. [DOI] [PubMed]
  126. Grossberg S, Bullock D, Dranias M. Neural dynamics underlying impaired autonomic and conditioned responses following amygdala and orbitofrontal lesions. Behavioral Neuroscience. 2008;122:1100–1125. doi: 10.1037/a0012808. [DOI] [PubMed] [Google Scholar]
  127. Grossberg, S. and Gutowski, W.E. (1987). Neural dynamics of decision making under risk: Affective balance and cognitive-emotional interactions. Psychological Review, 94, 300-318. [PubMed]
  128. Grossberg, S., & Kazerounian, S. (2016). Phoneme restoration and empirical coverage of Interactive Activation and Adaptive Resonance models of human speech processing. Journal of the Acoustical Society of America, in press. [DOI] [PubMed]
  129. Grossberg S, Levine DS. Neural dynamics of attentionally modulated Pavlovian conditioning: Blocking, inter-stimulus interval, and secondary reinforcement. Applied Optics. 1987;26:5015–5030. doi: 10.1364/AO.26.005015. [DOI] [PubMed] [Google Scholar]
  130. Grossberg S, Merrill JWL. A neural network model of adaptively timed reinforcement learning and hippocampal dynamics. Cognitive Brain Research. 1992;1:3–38. doi: 10.1016/0926-6410(92)90003-A. [DOI] [PubMed] [Google Scholar]
  131. Grossberg S, Merrill JWL. The hippocampus and cerebellum in adaptively timed learning, recognition, and movement. Journal of Cognitive Neuroscience. 1996;8:257–277. doi: 10.1162/jocn.1996.8.3.257. [DOI] [PubMed] [Google Scholar]
  132. Grossberg S, Paine RW. A neural model of corticocerebellar interactions during attentive imitation and predictive learning of sequential handwriting movements. Neural Networks. 2000;13:999–1046. doi: 10.1016/S0893-6080(00)00065-4. [DOI] [PubMed] [Google Scholar]
  133. Grossberg S, Pearson L. Laminar cortical dynamics of cognitive and motor working memory, sequence learning and performance: Toward a unified theory of how the cerebral cortex works. Psychological Review. 2008;115:677–732. doi: 10.1037/a0012618. [DOI] [PubMed] [Google Scholar]
  134. Grossberg S, Pilly PK. How Entorhinal Grid Cells May Learn Multiple Spatial Scales from a Dorsoventral Gradient of Cell Response Rates in a Self-organizing Map. PLoS Computational Biology. 2012;8(10):e1002648. doi: 10.1371/journal.pcbi.1002648. [DOI] [PMC free article] [PubMed] [Google Scholar]
  135. Grossberg S, Pilly PK. Coordinated learning of grid cell and place cell spatial and temporal properties: multiple scales, attention, and oscillations. Philosophical Transactions of the Royal Society B. 2014;369:20120524. doi: 10.1098/rstb.2012.0524. [DOI] [PMC free article] [PubMed] [Google Scholar]
  136. Grossberg S, Schmajuk NA. Neural dynamics of attentionally-modulated Pavlovian conditioning: Conditioned reinforcement, inhibition, and opponent processing. Psychobiology. 1987;15:195–240. [Google Scholar]
  137. Grossberg S, Schmajuk NA. Neural dynamics of adaptive timing and temporal discrimination during associative learning. Neural Networks. 1989;2:79–102. doi: 10.1016/0893-6080(89)90026-9. [DOI] [Google Scholar]
  138. Grossberg S, Seidman D. Neural dynamics of autistic behaviors: Cognitive, emotional, and timing substrates. Psychological Review. 2006;113:483–525. doi: 10.1037/0033-295X.113.3.483. [DOI] [PubMed] [Google Scholar]
  139. Grossberg S, Seitz A. Laminar development of receptive fields, maps, and columns in visual cortex: The coordinating role of the subplate. Cerebral Cortex. 2003;13:852–863. doi: 10.1093/cercor/13.8.852. [DOI] [PubMed] [Google Scholar]
  140. Grossberg S, Versace M. Spikes, synchrony, and attentive learning by laminar thalamocortical circuits. Brain Research. 2008;1218:278–312. doi: 10.1016/j.brainres.2008.04.024. [DOI] [PubMed] [Google Scholar]
  141. Halgren E, Walter RD, Cherlow DG, Crandall PH. Mental phenomena evoked by electrical stimulations of the human hippocampal formation and amygdala. Brain. 1978;101:83–117. doi: 10.1093/brain/101.1.83. [DOI] [PubMed] [Google Scholar]
  142. Halverson HE, Freeman JH. Medial auditory thalamic nuclei are necessary for eyeblink conditioning. Behavioral Neuroscience. 2006;120:880–887. doi: 10.1037/0735-7044.120.4.880. [DOI] [PMC free article] [PubMed] [Google Scholar]
  143. Halverson HE, Poremba A, Freeman JH. Medial auditory thalamus inactivation prevents acquisition and retention of eyeblink conditioning. Learning & Memory. 2008;15:532–538. doi: 10.1101/lm.1002508. [DOI] [PMC free article] [PubMed] [Google Scholar]
  144. Harries MH, Perrett DI. Visual processing of faces in temporal cortex: Physiological evidence for a modular organization and possible anatomical correlates. Journal of Cognitive Neuroscience. 1991;3:9–24. doi: 10.1162/jocn.1991.3.1.9. [DOI] [PubMed] [Google Scholar]
  145. Heldt SA, Stanek L, Chhatwal JP, Ressler KJ. Hippocampus-specific deletion of BDNF in adult mice impairs spatial memory and extinction of aversive memories. Molecular Psychiatry. 2007;12:655–670. doi: 10.1038/sj.mp.4001957. [DOI] [PMC free article] [PubMed] [Google Scholar]
  146. Hilgard ER, Campbell AA, Sears WN. Conditioned discrimination: Development with and without verbal report. American Journal of Psychology. 1937;49:564–580. doi: 10.2307/1416381. [DOI] [Google Scholar]
  147. Hobson JA, Pace-Schott EF. The cognitive neuroscience of sleep: neuronal systems, consciousness and learning. Nature Reviews Neuroscience. 2002;3:679–693. doi: 10.1038/nrn915. [DOI] [PubMed] [Google Scholar]
  148. Holland PC, Gallagher M. Amygdala circuitry in attentional and representational processes. Trends in Cognitive Science. 1999;3:65–73. doi: 10.1016/S1364-6613(98)01271-6. [DOI] [PubMed] [Google Scholar]
  149. Hodgkin, A.L. (1964). The Conduction of the Nervous Impulse. Springfield, IL: Charles C. Thomas.
  150. Höistad M, Barbas H. Sequence of information processing for emotions through pathways linking temporal and insular cortices with the amygdala. NeuroImage. 2008;40:1016–1033. doi: 10.1016/j.neuroimage.2007.12.043. [DOI] [PMC free article] [PubMed] [Google Scholar]
  151. Hu Y, Russek SJ. BDNF and the diseased nervous system: a delicate balance between adaptive and pathological processes of gene regulation. Journal of Neurochemistry. 2008;105:1–17. doi: 10.1111/j.1471-4159.2008.05237.x. [DOI] [PubMed] [Google Scholar]
  152. Huang T-R, Grossberg S. Cortical dynamics of contextually cued attentive visual learning and search: Spatial and object evidence accumulation. Psychological Review. 2010;117:1080–1112. doi: 10.1037/a0020664. [DOI] [PubMed] [Google Scholar]
  153. Ito M. The Cerebellum and Neural Control. New York: Raven Press; 1984. [Google Scholar]
  154. Ivkovich D, Thompson RF. Motor cortex lesions do not affect learning or performance of the eyeblink response in rabbits. Behavioral Neuroscience. 1997;111:727–738. doi: 10.1037/0735-7044.111.4.727. [DOI] [PubMed] [Google Scholar]
  155. James GO, Hardiman MJ, Yeo CH. Hippocampal lesions and trace conditioning in the rabbit. Behavioural Brain Research. 1987;23:109–116. doi: 10.1016/0166-4328(87)90048-9. [DOI] [PubMed] [Google Scholar]
  156. Ji D, Wilson MA. Coordinated memory replay in the visual cortex and hippocampus during sleep. Nature Neuroscience. 2007;10:100–107. doi: 10.1038/nn1825. [DOI] [PubMed] [Google Scholar]
  157. Jiang Y, Wagner LC. What is learned in spatial contextual cueing: Configuration or individual locations? Perception and Psychophysics. 2004;66:454–463. doi: 10.3758/BF03194893. [DOI] [PubMed] [Google Scholar]
  158. Kalaska JF, Cohen DAD, Hyde ML, Prud’homme MJ. A comparison of movement direction-related versus load direction-related activity in primate motor cortex using a two-dimensional reaching task. Journal of Neuroscience. 1989;9:2080–2102. doi: 10.1523/JNEUROSCI.09-06-02080.1989. [DOI] [PMC free article] [PubMed] [Google Scholar]
  159. Kali S, Dayan P. Off-line replay maintains declarative memories in a model of hippocampal-neocortical interactions. Nature Neuroscience. 2004;7:286–294. doi: 10.1038/nn1202. [DOI] [PubMed] [Google Scholar]
  160. Kalmbach BE, Ohyama T, Kreider JC, Riusech F, Mauk MD. Interactions between prefrontal cortex and cerebellum revealed by trace eyelid conditioning. Learning & Memory. 2009;16:86–95. doi: 10.1101/lm.1178309. [DOI] [PMC free article] [PubMed] [Google Scholar]
  161. Kaneko T, Thompson RF. Disruption of trace conditioning of the nictitating membrane response in rabbits by central cholinergic blockade. Psychopharmacology. 1997;131:161–166. doi: 10.1007/s002130050279. [DOI] [PubMed] [Google Scholar]
  162. Kang H, Schuman EM. Long-lasting neurotrophin-induced enhancement of synaptic transmission in the adult hippocampus. Science. 1995;267:1658–1662. doi: 10.1126/science.7886457. [DOI] [PubMed] [Google Scholar]
  163. Kang H, Welcher AA, Shelton D, Schuman EM. Neurotrophins and time: different roles for TrkB signaling in hippocampal long-term potentiation. Neuron. 1997;19:653–664. doi: 10.1016/S0896-6273(00)80378-5. [DOI] [PubMed] [Google Scholar]
  164. Kapp BS, Wilson A, Pascoe JP, Supple W, Whalen PJ. A neuroanatomical systems analysis of conditioned bradycardia in the rabbit. In: Gabriel M, Moore J, editors. Learning and Computational Neuroscience: Foundations of Adaptive Networks. Cambridge, MA: The MIT Press; 1990. pp. 53–90. [Google Scholar]
  165. Killcross AS, Everitt BJ, Robbins TW. Different types of fear-conditioned behaviour mediated by separate nuclei within amygdala. Nature. 1997;388:377–380. doi: 10.1038/41097. [DOI] [PubMed] [Google Scholar]
  166. Kim JJ, Clark RE, Thompson RF. Hippocampectomy impairs the memory of recently, but not remotely, acquired trace eyeblink conditioned responses. Behavioral Neuroscience. 1995;109:195–203. doi: 10.1037/0735-7044.109.2.195. [DOI] [PubMed] [Google Scholar]
  167. Kimble GA. Classical conditioning and the problem of awareness. Journal of Personality. 1962;30:27–45. doi: 10.1111/j.1467-6494.1962.tb01677.x. [DOI] [PubMed] [Google Scholar]
  169. Knipper M, da Penha Berzaghi M, Blöchl A, Breer H, Thoenen H, Lindholm D. Positive feedback between acetylcholine and the neurotrophins nerve growth factor and brain-derived neurotrophic factor in rat hippocampus. European Journal of Neuroscience. 1993;6:668–671. doi: 10.1111/j.1460-9568.1994.tb00312.x. [DOI] [PubMed] [Google Scholar]
  170. Kohara K, Kitamura A, Morishima M, Tsumoto T. Activity-dependent transfer of brain-derived neurotrophic factor to postsynaptic neurons. Science. 2001;291:2419–2423. doi: 10.1126/science.1057415. [DOI] [PubMed] [Google Scholar]
  171. Kokaia Z, Bengzon J, Metsis M, Kokaia M, Persson H, Lindvall O. Coexpression of neurotrophins and their receptors in neurons of the central nervous system. Proceedings of the National Academy of Sciences USA. 1993;90:6711–6715. doi: 10.1073/pnas.90.14.6711. [DOI] [PMC free article] [PubMed] [Google Scholar]
  172. Korte M, Carroll P, Wolf E, Brem G, Thoenen H, Bonhoeffer T. Hippocampal long-term potentiation is impaired in mice lacking brain-derived neurotrophic factor. Proceedings of the National Academy of Sciences USA. 1995;92:8856–8860. doi: 10.1073/pnas.92.19.8856. [DOI] [PMC free article] [PubMed] [Google Scholar]
  173. Koyama R, Ikegaya Y. To BDNF or not to BDNF: That is the epileptic hippocampus. The Neuroscientist. 2005;11:282–287. doi: 10.1177/1073858405278266. [DOI] [PubMed] [Google Scholar]
  174. Kraljic T, Samuel AG. Generalization in perceptual learning for speech. Psychonomic Bulletin and Review. 2006;13:262–268. doi: 10.3758/BF03193841. [DOI] [PubMed] [Google Scholar]
  175. Kraus BJ, Robinson RJ II, White JA, Eichenbaum H, Hasselmo ME. Hippocampal “time cells”: Time vs. path integration. Neuron. 2013;78:1090–1101. doi: 10.1016/j.neuron.2013.04.015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  176. Kronforst-Collins MA, Disterhoft JF. Lesions of the caudal area of rabbit medial prefrontal cortex impair trace eyeblink conditioning. Neurobiology of Learning and Memory. 1998;69:147–162. doi: 10.1006/nlme.1997.3818. [DOI] [PubMed] [Google Scholar]
  177. LeDoux JE. Emotional memory systems in the brain. Behavioural Brain Research. 1993;58:69–79. doi: 10.1016/0166-4328(93)90091-4. [DOI] [PubMed] [Google Scholar]
  178. Lee T, Kim JJ. Differential effects of cerebellar, amygdalar, and hippocampal lesions on classical eyeblink conditioning in rats. Journal of Neuroscience. 2004;24:3242–3250. doi: 10.1523/JNEUROSCI.5382-03.2004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  179. Lehmann H, Treit D, Parent M. Amygdala lesions do not impair shock-probe avoidance retention performance. Behavioral Neuroscience. 2000;114:107–116. doi: 10.1037/0735-7044.114.1.107. [DOI] [PubMed] [Google Scholar]
  180. Lessmann V. Neurotrophin-dependent modulation of glutamatergic synaptic transmission in the mammalian CNS. General Pharmacology. 1998;31:667–674. doi: 10.1016/S0306-3623(98)00190-6. [DOI] [PubMed] [Google Scholar]
  181. Li H, Bandrowski AE, Prince DA. Cortical injury affects short-term plasticity of evoked excitatory synaptic currents. Journal of Neurophysiology. 2005;93:146–156. doi: 10.1152/jn.00665.2004. [DOI] [PubMed] [Google Scholar]
  182. Liddle PF. Volition and schizophrenia. In: David AS, Cutting JC, editors. The Neuropsychology of Schizophrenia. Hillsdale: Erlbaum Press; 1994. pp. 39–49. [Google Scholar]
  183. Little AH, Lipsitt LP, Rovee-Collier C. Classical conditioning and retention of the infant's eyelid response: Effects of age and interstimulus interval. Journal of Experimental Child Psychology. 1984;37:512–524. doi: 10.1016/0022-0965(84)90074-2. [DOI] [PubMed] [Google Scholar]
  184. Lleras A, von Mühlenen A. Spatial context and top-down strategies in visual search. Spatial Vision. 2004;17:465–482. doi: 10.1163/1568568041920113. [DOI] [PubMed] [Google Scholar]
  185. Llinas R, Ribary U, Joliot M, Wang XT. Content and context in temporal thalamocortical binding. In: Buzsáki G, Llinas R, Singer W, Berthoz A, Christen Y, editors. Temporal Coding in the Brain. Berlin: Springer-Verlag; 1994. pp. 251–272. [Google Scholar]
  186. Louie K, Wilson MA. Temporally structured replay of awake hippocampal ensemble activity during rapid eye movement sleep. Neuron. 2001;29:145–156. doi: 10.1016/S0896-6273(01)00186-6. [DOI] [PubMed] [Google Scholar]
  187. Macchi, G., & Rinvik, E. (1976). Thalamo-telencephalic circuits: A neuroanatomical survey. In A. Rémond (Ed.). Handbook of Electroencephalography and Clinical Neurophysiology (Vol. 2, Pt. A). Amsterdam: Elsevier.
  188. MacDonald CJ, Lepage KQ, Eden UT, Eichenbaum H. Hippocampal “time cells” bridge the gap in memory for discontiguous events. Neuron. 2011;71:737–749. doi: 10.1016/j.neuron.2011.07.012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  189. Manns JR, Clark RE, Squire LR. Parallel acquisition of awareness and trace eyeblink classical conditioning. Learning & Memory. 2000;7:267–272. doi: 10.1101/lm.33400. [DOI] [PMC free article] [PubMed] [Google Scholar]
  190. Manns JR, Clark RE, Squire LR. Single-cue delay eyeblink conditioning is unrelated to awareness. Cognitive, Affective, & Behavioral Neuroscience. 2001;1:192–198. doi: 10.3758/CABN.1.2.192. [DOI] [PubMed] [Google Scholar]
  191. Maquet, P., Laureys, S., Peigneux, P., Fuchs, S., Petiau, C., Phillips, C., Cleeremans, A. (2000). Experience dependent changes in cerebral activation during human REM sleep. Nature Neuroscience, 3, 831–836. [DOI] [PubMed]
  192. Mauk MD, Ruiz BP. Learning-dependent timing of Pavlovian eyelid responses: Differential conditioning using multiple interstimulus intervals. Behavioral Neuroscience. 1992;106:666–681. doi: 10.1037/0735-7044.106.4.666. [DOI] [PubMed] [Google Scholar]
  193. Mauk MD, Thompson RF. Retention of classically conditioned eyelid responses following acute decerebration. Brain Research. 1987;403:89–95. doi: 10.1016/0006-8993(87)90126-0. [DOI] [PubMed] [Google Scholar]
  194. McAllister WR, McAllister DE. Effect of knowledge of conditioning upon eyelid conditioning. Journal of Experimental Psychology. 1958;55:579–583. doi: 10.1037/h0045355. [DOI] [PubMed] [Google Scholar]
  195. McClelland JL, McNaughton BL, O’Reilly RC. Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological Review. 1995;102:419–457. doi: 10.1037/0033-295X.102.3.419. [DOI] [PubMed] [Google Scholar]
  196. McEchron MD, Disterhoft JF. Sequence of single neuron changes in CA1 hippocampus of rabbits during acquisition of trace eyeblink conditioned responses. Journal of Neurophysiology. 1997;78:1030–1044. doi: 10.1152/jn.1997.78.2.1030. [DOI] [PubMed] [Google Scholar]
  197. McGaugh JL. Memory- a century of consolidation. Science. 2000;287:248–251. doi: 10.1126/science.287.5451.248. [DOI] [PubMed] [Google Scholar]
  198. McGaugh JL. Memory consolidation and the amygdala: A systems perspective. Trends in Neurosciences. 2002;25:456–461. doi: 10.1016/S0166-2236(02)02211-7. [DOI] [PubMed] [Google Scholar]
  199. McGlinchey-Berroth R, Brawn C, Disterhoft JF. Temporal discrimination learning in severe amnesic patients reveals an alteration in the timing of eyeblink conditioned responses. Behavioral Neuroscience. 1999;113:10–18. doi: 10.1037/0735-7044.113.1.10. [DOI] [PubMed] [Google Scholar]
  200. McGlinchey-Berroth R, Carrillo MC, Gabrieli JD, Brawn CM, Disterhoft JF. Impaired trace eyeblink conditioning in bilateral, medial-temporal lobe amnesia. Behavioral Neuroscience. 1997;111:873–882. doi: 10.1037/0735-7044.111.5.873. [DOI] [PubMed] [Google Scholar]
  201. McLaughlin J, Skaggs H, Churchwell J, Powell DA. Medial prefrontal cortex and Pavlovian conditioning: Trace versus delay conditioning. Behavioral Neuroscience. 2002;116:37–47. doi: 10.1037/0735-7044.116.1.37. [DOI] [PubMed] [Google Scholar]
  202. Medina JF, Repa JC, Mauk MD, LeDoux JE. Parallels between cerebellum- and amygdala-dependent conditioning. Nature Reviews Neuroscience. 2002;3:122–131. doi: 10.1038/nrn728. [DOI] [PubMed] [Google Scholar]
  203. Mehta MR. Cortico-hippocampal interaction during up-down states and memory consolidation. Nature Neuroscience. 2007;10:13–15. doi: 10.1038/nn0107-13. [DOI] [PubMed] [Google Scholar]
  204. Mhatre H, Gorchetchnikov A, Grossberg S. Grid cell hexagonal patterns formed by fast self-organized learning within entorhinal cortex. Hippocampus. 2012;22:320–334. doi: 10.1002/hipo.20901. [DOI] [PubMed] [Google Scholar]
  205. Mishkin M. A memory system in the monkey. Philosophical Transactions of the Royal Society of London. B: Biological Sciences. 1982;298:85–95. doi: 10.1098/rstb.1982.0074. [DOI] [PubMed] [Google Scholar]
  206. Mishkin M. Cerebral memory circuits. In: Poggio TA, Glaser DA, editors. Exploring Brain Functions: Models in Neuroscience. New York: Wiley & Sons; 1993. pp. 113–125. [Google Scholar]
  207. Mishkin M, Aggleton J. Multiple functional contributions of the amygdala in the monkey. In: Ben-Ari Y, editor. The Amygdaloid Complex. Amsterdam: Elsevier; 1981. pp. 409–420. [Google Scholar]
  208. Mishkin M, Ungerleider LG, Macko KA. Object vision and spatial vision: Two cortical pathways. Trends in Neurosciences. 1983;6:414–417. doi: 10.1016/0166-2236(83)90190-X. [DOI] [Google Scholar]
  209. Molteni R, Wu A, Vaynman S, Ying Z, Barnard RJ, Gomez-Pinilla F. Exercise reverses the harmful effects of consumption of a high-fat diet on synaptic and behavioral plasticity associated to the action of brain-derived neurotrophic factor. Neuroscience. 2004;123:429–440. doi: 10.1016/j.neuroscience.2003.09.020. [DOI] [PubMed] [Google Scholar]
  210. Monteggia, L. M., Barrett, M., Powell, C. M., Berton, O., Galanis, V., Gemelli, T., … Nestler, E. J. (2004). Essential role of brain-derived neurotrophic factor in adult hippocampal function. Proceedings of the National Academy of Sciences USA, 101, 10827–10832. [DOI] [PMC free article] [PubMed]
  211. Moustafa AA, Wufong E, Servatius RJ, Pang KC, Gluck MA, Myers CE. Why trace and delay conditioning are sometimes (but not always) hippocampal dependent: a computational model. Brain Research. 2013;1493:48–67. doi: 10.1016/j.brainres.2012.11.020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  212. Moyer JR, Jr, Deyo RA, Disterhoft JF. Hippocampectomy disrupts trace eye-blink conditioning in rabbits. Behavioral Neuroscience. 1990;104:243–252. doi: 10.1037/0735-7044.104.2.243. [DOI] [PubMed] [Google Scholar]
  213. Murray EA, Richmond BJ. Role of perirhinal cortex in object perception, memory, and associations. Current Opinion in Neurobiology. 2001;11:188–193. doi: 10.1016/S0959-4388(00)00195-1. [DOI] [PubMed] [Google Scholar]
  214. Nádasdy Z, Hirase H, Czurkó A, Csicsvari J, Buzsáki G. Replay and time compression of recurring spike sequences in the hippocampus. Journal of Neuroscience. 1999;19:9497–9507. doi: 10.1523/JNEUROSCI.19-21-09497.1999. [DOI] [PMC free article] [PubMed] [Google Scholar]
  215. Nadel L, Bohbot V. Consolidation of memory. Hippocampus. 2001;11:56–60. doi: 10.1002/1098-1063(2001)11:1<56::AID-HIPO1020>3.0.CO;2-O. [DOI] [PubMed] [Google Scholar]
  216. Nadel L, Moscovitch M. Memory consolidation, retrograde amnesia and the hippocampal complex. Current Opinion in Neurobiology. 1997;7:217–227. doi: 10.1016/S0959-4388(97)80010-4. [DOI] [PubMed] [Google Scholar]
  217. Neufeld M, Mintz M. Involvement of the amygdala in classical conditioning of eyeblink response in the rat. Brain Research. 2001;889:112–117. doi: 10.1016/S0006-8993(00)03123-1. [DOI] [PubMed] [Google Scholar]
  218. Norman RJ, Villablanca JR, Brown KA, Schwafel JA, Buchwald JS. Classical eyeblink conditioning in the bilaterally hemispherectomized cat. Experimental Neurology. 1974;44:363–380. doi: 10.1016/0014-4886(74)90202-7. [DOI] [PubMed] [Google Scholar]
  219. Nowak AJ, Berger TW. Functional three-dimensional distribution of entorhinal projections to dentate granule cells of the in vivo rabbit hippocampus. Society for Neuroscience Abstracts. 1992;18:321. [Google Scholar]
  220. Oakley DA, Steele Russell I. Neocortical lesions and Pavlovian conditioning. Physiology & Behavior. 1972;8:915–926. doi: 10.1016/0031-9384(72)90305-8. [DOI] [PubMed] [Google Scholar]
  221. O'Keefe, J., and Nadel, L. (1978). The hippocampus as a cognitive map. Oxford, UK: Oxford University Press.
  222. Olson IR, Chun MM. Perceptual constraints on implicit learning of spatial context. Visual Cognition. 2002;9:273–302. doi: 10.1080/13506280042000162. [DOI] [Google Scholar]
  223. Olson S, Grossberg S. A neural network model for the development of simple and complex cell receptive fields within cortical maps of orientation and ocular dominance. Neural Networks. 1998;11:189–208. doi: 10.1016/S0893-6080(98)00003-3. [DOI] [PubMed] [Google Scholar]
  224. O'Reilly JX, Beckmann CF, Tomassini V, Ramnani N, Johansen-Berg H. Distinct and Overlapping Functional Zones in the Cerebellum Defined by Resting State Functional Connectivity. Cerebral Cortex. 2010;20:953–965. doi: 10.1093/cercor/bhp157. [DOI] [PMC free article] [PubMed] [Google Scholar]
  225. O'Reilly RC, Rudy JW. Computational principles of learning in the neocortex and hippocampus. Hippocampus. 2000;10:389–397. doi: 10.1002/1098-1063(2000)10:4<389::AID-HIPO5>3.0.CO;2-P. [DOI] [PubMed] [Google Scholar]
  226. Orr WB, Berger TW. Hippocampectomy disrupts the topography of conditioned nictitating membrane responses during reversal learning. Behavioral Neuroscience. 1985;99:35–45. doi: 10.1037/0735-7044.99.1.35. [DOI] [PubMed] [Google Scholar]
  227. Oswald BB, Maddox SA, Tisdale N, Powell DA. Encoding and retrieval are differentially processed by the anterior cingulate and prelimbic cortices: A study based on trace eyeblink conditioning in the rabbit. Neurobiology of Learning and Memory. 2010;93:37–45. doi: 10.1016/j.nlm.2009.08.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  228. Otto T, Eichenbaum H. Neuronal activity in the hippocampus during delayed non-match to sample performance in rats: Evidence for hippocampal processing in recognition memory. Hippocampus. 1992;2:323–334. doi: 10.1002/hipo.450020310. [DOI] [PubMed] [Google Scholar]
  229. Palma, J., Grossberg, S., & Versace, M. (2012). Persistence and storage of activity patterns in spiking recurrent cortical networks: Modulation of sigmoid signals by after-hyperpolarization currents and acetylcholine. Frontiers in Computational Neuroscience, 6, 42. doi:10.3389/fncom.2012.00042 [DOI] [PMC free article] [PubMed]
  230. Palma J, Versace M, Grossberg S. After-hyperpolarization currents and acetylcholine control sigmoid transfer functions in a spiking cortical model. Journal of Computational Neuroscience. 2012;32:253–280. doi: 10.1007/s10827-011-0354-8. [DOI] [PubMed] [Google Scholar]
  232. Papka M, Ivry R, Woodruff-Pak DS. Eyeblink classical conditioning and awareness revisited. Psychological Science. 1997;8:404–408. [Google Scholar]
  233. Passingham RE. The Frontal Lobes and Voluntary Action. Oxford: Oxford University Press; 1997. [Google Scholar]
  234. Pavlov, I.P. (1927). Conditioned reflexes. London: Constable and Company. (Reprinted by Dover Publications, 1960.)
  235. Pessoa L, Japee S, Ungerleider LG. Visual awareness and the detection of fearful faces. Emotion. 2005;5:243–247. doi: 10.1037/1528-3542.5.2.243. [DOI] [PubMed] [Google Scholar]
  236. Pessoa L, Padmala S, Morland T. Fate of unattended fearful faces in the amygdala is determined by both attentional resources and cognitive modulation. NeuroImage. 2005;28:249–255. doi: 10.1016/j.neuroimage.2005.05.048. [DOI] [PMC free article] [PubMed] [Google Scholar]
  237. Phillips HS, Hains JM, Laramee GR, Rosenthal A, Winslow JW. Widespread expression of BDNF but not NT3 by target areas of basal forebrain cholinergic neurons. Science. 1990;250:290–294. doi: 10.1126/science.1688328. [DOI] [PubMed] [Google Scholar]
  238. Pilly PK, Grossberg S. How do spatial learning and memory occur in the brain? Coordinated learning of entorhinal grid cells and hippocampal place cells. Journal of Cognitive Neuroscience. 2012;24:1031–1054. doi: 10.1162/jocn_a_00200. [DOI] [PubMed] [Google Scholar]
  239. Pilly, P.K., and Grossberg, S. (2014) How does the modular organization of entorhinal grid cells develop? Frontiers in Human Neuroscience, doi:10.3389/fnhum.2014.0037. [DOI] [PMC free article] [PubMed]
  240. Pollen DA. On the neural correlates of visual perception. Cerebral Cortex. 1999;9:4–19. doi: 10.1093/cercor/9.1.4. [DOI] [PubMed] [Google Scholar]
  241. Port RL, Mikhail AA, Patterson MM. Differential effects of hippocampectomy on classically conditioned rabbit nictitating membrane response related to interstimulus interval. Behavioral Neuroscience. 1985;99:200–208. doi: 10.1037/0735-7044.99.2.200. [DOI] [PubMed] [Google Scholar]
  242. Port RL, Romano AG, Steinmetz JE, Mikhail AA, Patterson MM. Retention and acquisition of classical trace conditioned responses by rabbits with hippocampal lesions. Behavioral Neuroscience. 1986;100:745–752. doi: 10.1037/0735-7044.100.5.745. [DOI] [PubMed] [Google Scholar]
  243. Post RM. Role of BDNF in bipolar and unipolar disorder: Clinical and theoretical implications. Journal of Psychiatric Research. 2007;41:979–990. doi: 10.1016/j.jpsychires.2006.09.009. [DOI] [PubMed] [Google Scholar]
  244. Powell DA, Churchwell J. Mediodorsal thalamic lesions impair trace eyeblink conditioning in the rabbit. Learning & Memory. 2002;9:10–17. doi: 10.1101/lm.45302. [DOI] [PMC free article] [PubMed] [Google Scholar]
  245. Powell DA, Skaggs H, Churchwell J, McLauglin J. Posttraining lesions of the medial prefrontal cortex impair performance of Pavlovian eyeblink conditioning but have no effect on concomitant heart rate changes in rabbits (Oryctolagus cuniculus) Behavioral Neuroscience. 2001;115:1029–1038. doi: 10.1037/0735-7044.115.5.1029. [DOI] [PubMed] [Google Scholar]
  246. Purves D. Body and brain: A trophic theory of neural connections. Cambridge, MA: Harvard University Press; 1988. [DOI] [PubMed] [Google Scholar]
  247. Qin YL, McNaughton BL, Skaggs WE, Barnes CA. Memory reprocessing in corticocortical and hippocampalcortical neuronal ensembles. Philosophical Transactions of the Royal Society of London. B: Biological Sciences. 1997;352:1525–1533. doi: 10.1098/rstb.1997.0139. [DOI] [PMC free article] [PubMed] [Google Scholar]
  248. Raizada RDS, Grossberg S. Towards a theory of the laminar architecture of cerebral cortex: Computational clues from the visual system. Cerebral Cortex. 2003;13:100–113. doi: 10.1093/cercor/13.1.100. [DOI] [PubMed] [Google Scholar]
  249. Rattiner LM, Davis M, Ressler KJ. Brain-derived neurotrophic factor in amygdala-dependent learning. The Neuroscientist. 2005;11:323–333. doi: 10.1177/1073858404272255. [DOI] [PubMed] [Google Scholar]
  250. Rolls ET. The orbitofrontal cortex. In: Roberts AC, Robbins TW, Weiskrantz L, editors. The Prefrontal Cortex: Executive and Cognitive Functions. Oxford: Oxford University Press; 1998. pp. 67–86. [Google Scholar]
  251. Rolls ET. The orbitofrontal cortex and reward. Cerebral Cortex. 2000;10:284–294. doi: 10.1093/cercor/10.3.284. [DOI] [PubMed] [Google Scholar]
  252. Rossato JI, Bevilaqua LRM, Izquierdo I, Medina JH, Cammarota M. Dopamine controls persistence of long-term memory storage. Science. 2009;325:1017–1020. doi: 10.1126/science.1172545. [DOI] [PubMed] [Google Scholar]
  253. Sakurai Y. Hippocampal cells have behavioral correlates during the performance of an auditory working memory task in the rat. Behavioral Neuroscience. 1990;104:253–263. doi: 10.1037/0735-7044.104.2.253. [DOI] [PubMed] [Google Scholar]
  254. Schinder AF, Berninger B, Poo M-m. Postsynaptic specificity of neurotrophin-induced presynaptic potentiation. Neuron. 2000;25:151–163. doi: 10.1016/S0896-6273(00)80879-X. [DOI] [PubMed] [Google Scholar]
  255. Schmajuk NA, Lam P, Christiansen BA. Hippocampectomy disrupts latent inhibition of the rat eyeblink conditioning. Physiology and Behavior. 1994;55:597–601. doi: 10.1016/0031-9384(94)90122-8. [DOI] [PubMed] [Google Scholar]
  256. Schmaltz LW, Theios J. Acquisition and extinction of a classically conditioned response in hippocampectomized rabbits. Journal of Comparative Physiological Psychology. 1972;79:328–333. doi: 10.1037/h0032531. [DOI] [PubMed] [Google Scholar]
  257. Schoenbaum G, Eichenbaum H. Information coding in the rodent prefrontal cortex: II. Ensemble activity in orbitofrontal cortex. Journal of Neurophysiology. 1995;74:751–762. doi: 10.1152/jn.1995.74.2.751. [DOI] [PubMed] [Google Scholar]
  258. Schoenbaum G, Setlow B, Saddoris MP, Gallagher M. Encoding predicted outcome and acquired value in orbitofrontal cortex during cue sampling depends upon input from basolateral amygdala. Neuron. 2003;39:855–867. doi: 10.1016/S0896-6273(03)00474-4. [DOI] [PubMed] [Google Scholar]
  259. Schultz W. Predictive reward signal of dopamine neurons. Journal of Neurophysiology. 1998;80:1–27. doi: 10.1152/jn.1998.80.1.1. [DOI] [PubMed] [Google Scholar]
  260. Schultz W, Apicella P, Scarnati E, Ljungberg T. Neuronal activity in monkey ventral striatum related to the expectation of reward. Journal of Neuroscience. 1992;12:4595–4610. doi: 10.1523/JNEUROSCI.12-12-04595.1992. [DOI] [PMC free article] [PubMed] [Google Scholar]
  261. Schuman EM. Neurotrophin regulation of synaptic transmission. Current Opinion in Neurobiology. 1999;9:105–109. doi: 10.1016/S0959-4388(99)80013-0. [DOI] [PubMed] [Google Scholar]
  262. Sears LL, Steinmetz JE. Acquisition of classically conditioned-related activity in the hippocampus is affected by lesions of the cerebellar interpositus nucleus. Behavioral Neuroscience. 1990;104:681–692. doi: 10.1037/0735-7044.104.5.681. [DOI] [PubMed] [Google Scholar]
  263. Sherman SM, Guillery RW. The role of thalamus in the flow of information to cortex. Philosophical Transactions of the Royal Society of London. B: Biological Sciences. 2002;357:1695–1708. doi: 10.1098/rstb.2002.1161. [DOI] [PMC free article] [PubMed] [Google Scholar]
  264. Shors TJ, Weiss C, Thompson RF. Stress-induced facilitation of classical conditioning. Science. 1992;257:537–539. doi: 10.1126/science.1636089. [DOI] [PubMed] [Google Scholar]
  265. Siapas AG, Lubenov EV, Wilson MA. Prefrontal phase locking to hippocampal theta oscillations. Neuron. 2005;46:141–151. doi: 10.1016/j.neuron.2005.02.028. [DOI] [PubMed] [Google Scholar]
  266. Siegel JJ, Kalmbach B, Chitwood RA, Mauk MD. Persistent activity in a cortical-to-subcortical circuit: bridging the temporal gap in trace eyelid conditioning. Journal of Neurophysiology. 2012;107:50–64. doi: 10.1152/jn.00689.2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  267. Siegel, J.J., Taylor, W., Gray, R., Kalmbach, B., Zemelman, B.V., Desai, N.S., Johnston, D., & Chitwood, R.A. (2015). Trace eyeblink conditioning in mice is dependent upon the dorsal medial prefrontal cortex, cerebellum, and amygdala: Behavioral characterization and functional circuitry. eNeuro, 2. PMID 26464998. doi: 10.1523/ENEURO.0051-14.2015 [DOI] [PMC free article] [PubMed]
  268. Sillito AM, Jones HE, Gerstein GL, West DC. Feature-linked synchronization of thalamic relay cell firing induced by feedback from the visual cortex. Nature. 1994;369:479–482. doi: 10.1038/369479a0. [DOI] [PubMed] [Google Scholar]
  269. Silver MR, Grossberg S, Bullock D, Histed MH, Miller EK. A neural model of sequential movement planning and control of eye movements: Item-order-rank working memory and saccade selection by the supplementary eye fields. Neural Networks. 2011;26:29–58. doi: 10.1016/j.neunet.2011.10.004. [DOI] [PubMed] [Google Scholar]
  270. Simon B, Knuckley B, Churchwell J, Powell DA. Post-training lesions of the medial prefrontal cortex interfere with subsequent performance of trace eyeblink conditioning. Journal of Neuroscience. 2005;25:10740–10746. doi: 10.1523/JNEUROSCI.3003-05.2005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  271. Singer W. Time as coding space. Current Opinion in Neurobiology. 1999;9:189–194. doi: 10.1016/S0959-4388(99)80026-9. [DOI] [PubMed] [Google Scholar]
  272. Sireteanu R, Rettenbach R. Perceptual learning in visual search: Fast, enduring, but non-specific. Vision Research. 1995;35:2037–2043. doi: 10.1016/0042-6989(94)00295-W. [DOI] [PubMed] [Google Scholar]
  273. Skaggs WE, McNaughton BL. Replay of neuronal firing sequences in rat hippocampus during sleep following spatial experience. Science. 1996;271:1870–1873. doi: 10.1126/science.271.5257.1870. [DOI] [PubMed] [Google Scholar]
  274. Smith MC. CS-US interval and US intensity in classical conditioning of the rabbit’s nictitating membrane response. Journal of Comparative and Physiological Psychology. 1968;66:679–687. doi: 10.1037/h0026550. [DOI] [PubMed] [Google Scholar]
  275. Smythe JW, Colom LV, Bland BH. The extrinsic modulation of hippocampal theta depends on the coactivation of cholinergic and GABA-ergic medial septal inputs. Neuroscience & Biobehavioral Reviews. 1992;16:289–308. doi: 10.1016/S0149-7634(05)80203-9. [DOI] [PubMed] [Google Scholar]
  276. Solomon PR, Moore JW. Latent inhibition and stimulus generalization of the classically conditioned membrane response in rabbits (Oryctolagus cuniculus) following dorsal hippocampal ablation. Journal of Comparative Physiological Psychology. 1975;89:1192–1203. doi: 10.1037/h0077183. [DOI] [PubMed] [Google Scholar]
  277. Solomon PR, Groccia-Ellison M, Levine E, Blanchard S, Pendlebury WW. Do temporal relationships in conditioning change across the life span? Perspectives from eyeblink conditioning in humans and rabbits. Annals of the New York Academy of Sciences. 1990;608:212–238. doi: 10.1111/j.1749-6632.1990.tb48898.x. [DOI] [PubMed] [Google Scholar]
  278. Sosina, V.D. (1992). The EEG analysis of the interrelationships of structures of the thalamofrontal system during the recovery of conditioned reflex behavior of amygdalectomized rats. Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Sciences, Moscow. Translated from Zhurnal Vysshei Nervnoi Deyatel’nosti imeni I.P. Pavlova, 42, 672–678. Plenum Publishing Corporation, 0097-0549/93/2305-0398, pp. 398–403.
  279. Squire LR, Alvarez P. Retrograde amnesia and memory consolidation: a neurobiological perspective. Current Opinion in Neurobiology. 1995;5:178–183. doi: 10.1016/0959-4388(95)80023-9. [DOI] [PubMed] [Google Scholar]
  280. Squire LR, Cohen NJ. Human memory and amnesia. In: Lynch G, McGaugh J, Weinberger NM, editors. Neurobiology of Learning and Memory. New York: Guilford Press; 1984. pp. 3–64. [Google Scholar]
  281. Stanley DA, Rubin N. Rapid detection of salient regions: Evidence from apparent motion. Journal of Vision. 2005;5:690–701. doi: 10.1167/5.9.4. [DOI] [PubMed] [Google Scholar]
  282. Staubli U, Lynch G. Stable hippocampal long-term potentiation elicited by theta pattern stimulation. Brain Research. 1987;435:227–234. doi: 10.1016/0006-8993(87)91605-2. [DOI] [PubMed] [Google Scholar]
  283. Steriade M. Coherent oscillations and short-term plasticity in corticothalamic networks. Trends in Neuroscience. 1999;22:337–345. doi: 10.1016/S0166-2236(99)01407-1. [DOI] [PubMed] [Google Scholar]
  284. Takashima A, Nieuwenhuis ILC, Jensen O, Talamini LM, Rijpkema M, Fernández G. Shift from Hippocampal to Neocortical Centered Retrieval Network with Consolidation. Journal of Neuroscience. 2009;29:10087–10093. doi: 10.1523/JNEUROSCI.0799-09.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  285. Takehara K, Kawahara S, Krino Y. Time-dependent reorganization of the brain components underlying memory retention in trace eyeblink conditioning. Journal of Neuroscience. 2003;23:9897–9905. doi: 10.1523/JNEUROSCI.23-30-09897.2003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  286. Thoenen H. Neurotrophins and neural plasticity. Science. 1995;270:593–598. doi: 10.1126/science.270.5236.593. [DOI] [PubMed] [Google Scholar]
  287. Thompson RF. The neural basis of basic associative learning of discrete behavioral responses. Trends in Neurosciences. 1988;11:152–155. doi: 10.1016/0166-2236(88)90141-5. [DOI] [PubMed] [Google Scholar]
  288. Thompson, R. F., Clark, G. A., Donegan, N. H., Lavond, G. A., Lincoln, D. G., Maddon, J., … McCormick, D. A. (1987). Neuronal substrates of discrete, defensive conditioned reflexes, conditioned fear states, and their interactions in the rabbit. In I. Gormenzano, W. F. Prokasy, & R. F. Thompson (Eds.), Classical Conditioning (3rd ed., pp. 371–399). Hillsdale, NJ: Erlbaum Associates.
  289. Thompson LT, Moyer JR, Jr, Disterhoft JF. Transient changes in excitability of rabbit CA3 neurons with a time course appropriate to support memory consolidation. Journal of Neurophysiology. 1996;76:1836–1849. doi: 10.1152/jn.1996.76.3.1836. [DOI] [PubMed] [Google Scholar]
  290. Tieu KH, Keidel AL, McGann JP, Faulkner B, Brown TH. Perirhinal-amygdala circuit-level computational model of temporal encoding in fear conditioning. Psychobiology. 1999;27:1–25. [Google Scholar]
  291. Tokuoka H, Saito T, Yorifuji H, Kishimoto T, Hisanaga S. Brain-derived neurotrophic factor-induced phosphorylation of neurofilament-H subunit in primary cultures of embryo rat cortical neurons. Journal of Cell Science. 2000;113:1059–1068. doi: 10.1242/jcs.113.6.1059. [DOI] [PubMed] [Google Scholar]
  292. Tsumoto T, Creutzfeldt OD, Legéndy CF. Functional organization of the corticofugal system from visual cortex to lateral geniculate nucleus in the cat. Experimental Brain Research. 1978;32:345–364. doi: 10.1007/BF00238707. [DOI] [PubMed] [Google Scholar]
  293. Tucker KL, Meyer M, Barde YA. Neurotrophins are required for nerve growth during development. Nature Neuroscience. 2001;4:29–37. doi: 10.1038/82868. [DOI] [PubMed] [Google Scholar]
  294. Tulving E. Episodic and semantic memory. In: Tulving E, Donaldson W, editors. Organization of Memory. New York, NY: Academic press; 1972. [Google Scholar]
  295. Tyler WJ, Alonso M, Bramham CR, Pozzo-Miller LD. From acquisition to consolidation: On the role of brain-derived neurotrophic factor signaling in hippocampal-dependent learning. Learning & Memory. 2002;9:224–237. doi: 10.1101/lm.51202. [DOI] [PMC free article] [PubMed] [Google Scholar]
  296. Ungerleider LG, Mishkin M. Two cortical visual systems: Separation of appearance and location of objects. In: Ingle DL, Goodale MA, Mansfield RJW, editors. Analysis of Visual Behavior. Cambridge: MIT Press; 1982. pp. 549–586. [Google Scholar]
  297. Van Essen DC, Maunsell JHR. Hierarchical organization and functional streams in the visual cortex. Trends in Neurosciences. 1983;6:370–375. doi: 10.1016/0166-2236(83)90167-4. [DOI] [Google Scholar]
  298. van Ooyen A, Willshaw DJ. Competition for neurotrophic factor in the development of nerve connections. Proceedings of the Royal Society of London. B: Biological Sciences. 1999;266:883–892. doi: 10.1098/rspb.1999.0719. [DOI] [PMC free article] [PubMed] [Google Scholar]
  299. Vazdarjanova A, McGaugh JL. Basolateral amygdala is not a critical locus for memory of contextual fear conditioning. Proceedings of the National Academy of Sciences USA. 1998;95:15003–15007. doi: 10.1073/pnas.95.25.15003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  300. Vertes RP, Hoover WB, Di Prisco GV. Theta rhythm of the hippocampus: Subcortical control and functional significance. Behavioral and Cognitive Neuroscience Reviews. 2004;3:173–200. doi: 10.1177/1534582304273594. [DOI] [PubMed] [Google Scholar]
  301. Vinogradova OS. Functional organization of the limbic system in the process of registration of information: facts and hypotheses. In: Isaacson RC, Pribram KH, editors. The Hippocampus: Volume 2: Neurophysiology and behavior. New York: Plenum Press; 1975. pp. 3–70. [Google Scholar]
  302. Wagman JB, Shockley K, Riley MA, Turvey MT. Attunement, calibration, and exploration in fast haptic perceptual learning. Journal of Motor Behavior. 2001;33:323–327. doi: 10.1080/00222890109601917. [DOI] [PubMed] [Google Scholar]
  303. Walker AG, Steinmetz JE. Hippocampal lesions in rats differentially affect long- and short-trace eyeblink conditioning. Physiology & Behavior. 2008;93:570–578. doi: 10.1016/j.physbeh.2007.10.018. [DOI] [PubMed] [Google Scholar]
  304. Weible AP, McEchron MD, Disterhoft JF. Cortical involvement in acquisition and extinction of trace eyeblink conditioning. Behavioral Neuroscience. 2000;114:1058–1067. doi: 10.1037/0735-7044.114.6.1058. [DOI] [PubMed] [Google Scholar]
  305. Weiskrantz L, Warrington EK. Conditioning in amnesic patients. Neuropsychologia. 1979;17:187–194. doi: 10.1016/0028-3932(79)90009-5. [DOI] [PubMed] [Google Scholar]
  306. Weiss C, Disterhoft JF. Exploring Prefrontal Cortical Memory Mechanisms with Eyeblink Conditioning. Behavioral Neuroscience. 2011;125:318–326. doi: 10.1037/a0023520. [DOI] [PMC free article] [PubMed] [Google Scholar]
  307. Weiss C, Thompson RF. The effects of age on eyeblink conditioning in the freely moving rat: Optimizing the conditioning parameters. Behavioral Neuroscience. 1991;113:1100–1105. doi: 10.1037/0735-7044.113.5.1100. [DOI] [PubMed] [Google Scholar]
  308. Weiss C, Thompson RF. Trace eyeblink conditioning in the freely moving Fischer-344 rat. Neurobiology of Aging. 1991;12:249–254. doi: 10.1016/0197-4580(91)90105-S. [DOI] [PubMed] [Google Scholar]
  309. Wilson FAW, Ó Scalaidhe SP, Goldman-Rakic PS. Dissociation of object and spatial processing domains in primate prefrontal cortex. Science. 1993;260:1955–1958. doi: 10.1126/science.8316836. [DOI] [PubMed] [Google Scholar]
  310. Wilson MA. Hippocampal memory formation, plasticity, and the role of sleep. Neurobiology of Learning and Memory. 2002;78:565–569. doi: 10.1006/nlme.2002.4098. [DOI] [PubMed] [Google Scholar]
  311. Wilson MA, McNaughton BL. Reactivation of hippocampal ensemble memories during sleep. Science. 1994;265:676–679. doi: 10.1126/science.8036517. [DOI] [PubMed] [Google Scholar]
  312. Winocur G, Moscovitch M, Bontempi B. Memory formation and long-term retention in humans and animals: Convergence towards a transformation account of hippocampal–neocortical interactions. Neuropsychologia. 2010;48:2339–2356. doi: 10.1016/j.neuropsychologia.2010.04.016. [DOI] [PubMed] [Google Scholar]
  313. Woodruff-Pak DS. Classical eye-blink conditioning in H.M.: Delay and trace paradigms. Behavioral Neuroscience. 1993;107:911–925. doi: 10.1037/0735-7044.107.6.911. [DOI] [PubMed] [Google Scholar]
  314. Woodruff-Pak DS. Eyeblink classical conditioning differentiates normal aging from Alzheimer's disease. Integrative Physiological and Behavioral Science. 2001;36:87–108. doi: 10.1007/BF02734044. [DOI] [PubMed] [Google Scholar]
  315. Woodruff-Pak DS, Disterhoft JF. Where is the trace in trace conditioning? Trends in Neurosciences. 2007;31:105–112. doi: 10.1016/j.tins.2007.11.006. [DOI] [PubMed] [Google Scholar]
  316. Woodruff-Pak DS, Steinmetz JE, editors. Eyeblink classical conditioning: Volume I: Applications in humans. Boston: Kluwer Academic Publishers; 2000. [Google Scholar]
  317. Woodruff-Pak DS, Steinmetz JE, editors. Eyeblink classical conditioning: Volume II: Animal models. Boston: Kluwer Academic Publishers; 2000. [Google Scholar]
  318. Yazdanbakhsh A, Grossberg S. Fast synchronization of perceptual grouping in laminar visual cortical circuits. Neural Networks. 2004;17:707–718. doi: 10.1016/j.neunet.2004.06.005. [DOI] [PubMed] [Google Scholar]
  319. Yeo CH, Hardiman MJ, Moore JW, Steele Russell I. Trace conditioning of the nictitating membrane response in decorticate rabbits. Behavioural Brain Research. 1984;11:85–88. doi: 10.1016/0166-4328(84)90010-X. [DOI] [PubMed] [Google Scholar]
  320. Zhang HT, Li LY, Zou XL, Song XB, Hu YL, Feng ZT, Wang TTH. Immunohistological distribution of NGF, BDNF, NT-3, and NT-4 in adult rhesus monkey brains. Journal of Histochemistry and Cytochemistry. 2007;55:1–19. doi: 10.1369/jhc.6A6952.2006. [DOI] [PubMed] [Google Scholar]
