eLife. 2021 Oct 14;10:e65309. doi: 10.7554/eLife.65309

The generation of cortical novelty responses through inhibitory plasticity

Auguste Schulz 1,2, Christoph Miehl 1,3, Michael J Berry II 4, Julijana Gjorgjieva 1,3
Editors: Maria N Geffen5, Joshua I Gold6
PMCID: PMC8516419  PMID: 34647889

Abstract

Animals depend on fast and reliable detection of novel stimuli in their environment. Neurons in multiple sensory areas respond more strongly to novel than to familiar stimuli. Yet, it remains unclear which circuit, cellular, and synaptic mechanisms underlie those responses. Here, we show that spike-timing-dependent plasticity of inhibitory-to-excitatory synapses generates novelty responses in a recurrent spiking network model. Inhibitory plasticity increases the inhibition onto excitatory neurons tuned to familiar stimuli, while inhibition for novel stimuli remains low, leading to a network novelty response. The generation of novelty responses does not depend on the periodicity but rather on the distribution of presented stimuli. By including tuning of inhibitory neurons, the network further captures stimulus-specific adaptation. Finally, we suggest that disinhibition can control the amplification of novelty responses. Therefore, inhibitory plasticity provides a flexible, biologically plausible mechanism to detect the novelty of bottom-up stimuli, enabling us to make experimentally testable predictions.


Introduction

In an ever-changing environment, animals must rapidly extract behaviorally useful information from sensory stimuli. Appropriate behavioral adjustments to unexpected changes in stimulus statistics are fundamental for the survival of an animal. We still do not fully understand how the brain detects such changes reliably and quickly. Local neural circuits perform computations on incoming sensory stimuli in an efficient manner by maximizing transmitted information or minimizing metabolic cost (Simoncelli and Olshausen, 2001; Barlow, 2013). Repeated or predictable stimuli do not provide new meaningful information. As a consequence, one should expect that responses to repeated stimuli are suppressed – a phenomenon postulated by the framework of predictive coding (Clark, 2013; Spratling, 2017). Recent experiments have demonstrated that sensory circuits across different modalities can encode a sequence or expectation violation and can detect novelty (Keller et al., 2012; Natan et al., 2015; Zmarz and Keller, 2016; Hamm and Yuste, 2016; Homann et al., 2017). The underlying neuronal and circuit mechanisms behind expectation violation and novelty detection, however, remain elusive.

A prominent paradigm used experimentally involves two types of stimuli, the repeated (or frequent) and the novel (or deviant) stimulus (Näätänen et al., 1982; Fairhall, 2014; Natan et al., 2015; Homann et al., 2017; Weber et al., 2019). Here, the neuronal responses to repeated stimuli decrease, a phenomenon that is often referred to as adaptation (Fairhall, 2014). Adaptation can occur over a wide range of timescales, which range from milliseconds to seconds (Ulanovsky et al., 2004; Lundstrom et al., 2010), and to multiple days in the case of behavioral habituation (Haak et al., 2014; Ramaswami, 2014). We refer to the elevated neuronal response to a novel stimulus, compared to the response to a repeated stimulus, as a ‘novelty response’ (Homann et al., 2017). Responses to repeated versus novel stimuli, more generally, have also been studied on different spatial scales spanning the single neuron level, cortical microcircuits and whole brain regions. At the scale of whole brain regions, a widely studied phenomenon is the mismatch negativity (MMN), which is classically detected in electroencephalography (EEG) data and often based on an auditory or visual ‘oddball’ paradigm (Näätänen et al., 1982; Hamm and Yuste, 2016). The occasional presentation of the so-called oddball stimulus among frequently repeated stimuli leads to a negative deflection in the EEG signal – the MMN (Näätänen et al., 2007).

Experiments at the cellular level typically follow the oddball paradigm with two stimuli that, if presented in isolation, would drive a neuron equally strongly. However, when one stimulus is presented frequently and the other rarely, the deviant produces a stronger response relative to the frequent stimulus (Ulanovsky et al., 2003; Nelken, 2014; Natan et al., 2015). The observed reduction in response to the repeated, but not the deviant, stimulus has been termed stimulus-specific adaptation (SSA) and has been suggested to contribute to the MMN (Ulanovsky et al., 2003). SSA has been observed in multiple brain areas, most commonly reported in the primary auditory cortex (Ulanovsky et al., 2003; Yaron et al., 2012; Natan et al., 2015; Seay et al., 2020) and the primary visual cortex (Movshon and Lennie, 1979; Hamm and Yuste, 2016; Vinken et al., 2017; Homann et al., 2017). Along the visual pathway, SSA has also been found at different earlier stages including the retina (Schwartz et al., 2007; Geffen et al., 2007; Schwartz and Berry, 2008) and the visual thalamic nuclei (Dhruv and Carandini, 2014; King et al., 2016).

To unravel the link between multiple spatial and temporal scales of adaptation, a variety of mechanisms has been proposed. Most notably, modeling studies have explored the role of adaptive currents, which reduce the excitability of the neuron (Brette and Gerstner, 2005), and short-term depression of excitatory feedforward synapses (Tsodyks et al., 1998). Most models of SSA in primary sensory areas of the cortex focus on short-term plasticity and the depression of thalamocortical feedforward synapses (Mill et al., 2011a; Mill et al., 2011b; Park and Geffen, 2020). The contribution of other mechanisms has been under-explored in this context. Recent experimental studies suggest that inhibition and the plasticity of inhibitory synapses shape the responses to repeated and novel stimuli (Chen et al., 2015; Kato et al., 2015; Natan et al., 2015; Hamm and Yuste, 2016; Natan et al., 2017; Heintz et al., 2020). Natan and colleagues observed that in the mouse auditory cortex, both parvalbumin-positive (PV) and somatostatin-positive (SOM) interneurons contribute to SSA (Natan et al., 2015). Furthermore, neurons that are more strongly adapted receive stronger inhibitory input than less adapted neurons, suggesting potentiation of inhibitory synapses as an underlying mechanism (Natan et al., 2017). In the context of habituation, inhibitory plasticity has been previously hypothesized to be the driving mechanism behind the reduction of neural responses to repeated stimuli (Ramaswami, 2014; Barron et al., 2017). Habituated behavior in Drosophila, for example, results from prolonged activation of an odor-specific excitatory subnetwork, which leads to the selective strengthening of inhibitory synapses onto the excitatory subnetwork (Das et al., 2011; Glanzman, 2011; Ramaswami, 2014; Barron et al., 2017).

Here, we focus on the role of inhibitory spike-timing-dependent plasticity (iSTDP) in characterizing neuronal responses to repeated and novel stimuli at the circuit level. We base our study on a recurrent spiking neural network model of the mammalian cortex with biologically inspired plasticity mechanisms that can generate assemblies in connectivity and attractors in activity to represent the stimulus-specific activation of specific sub-circuits (Litwin-Kumar and Doiron, 2014; Zenke et al., 2015; Wu et al., 2020). We model excitatory and inhibitory neurons and include stimulus-specific input not only to the excitatory but also to the inhibitory population, as found experimentally (Ma et al., 2010; Griffen and Maffei, 2014; Znamenskiy et al., 2018). This additional assumption readily leads to the formation of specific inhibitory-to-excitatory connections through inhibitory plasticity (Vogels et al., 2011), as suggested by recent experiments (Lee et al., 2014; Xue et al., 2014; Znamenskiy et al., 2018; Najafi et al., 2020).

We demonstrate that this model network can generate excess population activity when novel stimuli are presented as violations of repeated stimulus sequences. Our framework identifies plasticity of inhibitory synapses as a sufficient mechanism to explain population novelty responses and adaptive phenomena on multiple timescales. In addition, stimulus-specific inhibitory connectivity supports adaptation to specific stimuli (SSA). This finding reveals that the network configuration encompasses computational capabilities beyond those of intrinsic adaptation. Furthermore, we suggest disinhibition to be a powerful regulator of the amplification of novelty responses. Our modeling framework enables us to formulate additional experimentally testable predictions. Most intriguingly, we hypothesize that neurons in primary sensory cortex may not signal the violation of periodicity of a sequence based on bottom-up input, but rather adapt to the distribution of presented stimuli.

Results

A recurrent neural network model with plastic inhibition can generate novelty responses

Recent experimental studies have indicated an essential role of inhibitory circuits and inhibitory plasticity in adaptive phenomena and novelty responses (Chen et al., 2015; Kato et al., 2015; Natan et al., 2015; Hamm and Yuste, 2016; Natan et al., 2017; Heintz et al., 2020). To understand if and how plastic inhibitory circuits could explain the emergence of novelty responses, we built a biologically plausible spiking neuronal network model of 4000 recurrently connected excitatory and 1000 inhibitory neurons based on recent experimental findings on tuning, connectivity, and inhibitory and excitatory STDP in the cortex (Materials and methods). Excitatory-to-excitatory (E-to-E) synapses were plastic based on the triplet spike-timing-dependent plasticity (eSTDP) rule (Sjöström et al., 2001; Pfister and Gerstner, 2006; Gjorgjieva et al., 2011; Figure 1—figure supplement 1). The triplet STDP rule enabled the formation of strong bidirectional connections among similarly selective neurons (Gjorgjieva et al., 2011; Montangie et al., 2020). Plasticity of connections from inhibitory to excitatory neurons was based on an inhibitory STDP (iSTDP) rule measured experimentally (D'amour and Froemke, 2015), and shown to stabilize excitatory firing rate dynamics in recurrent networks (Vogels et al., 2011; Figure 1—figure supplement 1). In contrast to other frameworks, which have identified short-term plasticity as key for capturing adaptation phenomena, we included only long-term plasticity and did not explicitly model additional adaptation mechanisms.
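To make the inhibitory plasticity mechanism concrete, the sketch below implements a minimal, Vogels-et-al.-2011-style iSTDP update for a single I-to-E synapse: inhibitory weights potentiate when the postsynaptic excitatory neuron fires above a target rate and depress when it fires below it. The parameter values, units, and function interface are illustrative assumptions and not the exact rule fitted to the D'amour and Froemke (2015) data in the model.

```python
# Minimal sketch of a Vogels-et-al.-2011-style iSTDP rule for one I-to-E synapse.
# Parameter values are illustrative, not the model's.
tau_istdp = 0.020          # s, decay time constant of the pre/post spike traces
rho0      = 3.0            # Hz, assumed target excitatory firing rate
eta       = 1.0            # learning rate (arbitrary units here)
alpha     = 2.0 * rho0 * tau_istdp   # depression bias setting the target rate

def istdp_step(w_ie, pre_spiked, post_spiked, x_pre, x_post, dt):
    """One simulation time step: decay the spike traces and update the weight."""
    x_pre  += -dt / tau_istdp * x_pre  + pre_spiked    # trace of inhibitory (pre) spikes
    x_post += -dt / tau_istdp * x_post + post_spiked   # trace of excitatory (post) spikes
    if pre_spiked:
        w_ie += eta * (x_post - alpha)   # net depression when the postsynaptic rate is low
    if post_spiked:
        w_ie += eta * x_pre              # potentiation for pre-before-post pairings
    return max(w_ie, 0.0), x_pre, x_post
```

Averaged over irregular spiking, a rule of this form pushes the postsynaptic excitatory rate toward the target rho0, which is the negative feedback exploited throughout the paper.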

We targeted different subsets of excitatory and inhibitory neurons with different external stimuli, to model that these neurons are stimulus-specific (‘tuned’) to a given stimulus (Figure 1A, left, see Materials and methods). One neuron could be driven by multiple stimuli. Starting from an initially randomly connected network, presenting tuned input led to the emergence of excitatory assemblies, which are strongly connected, functionally related subsets of excitatory neurons (Figure 1—figure supplement 2C, left). Furthermore, tuned input also led to the stimulus-specific potentiation of inhibitory-to-excitatory connections (Figure 1—figure supplement 2E, left). We refer to this part of structure formation as the ‘pretraining phase’ of our simulations (Materials and methods). This pretraining phase imprinted structure in the network prior to the actual stimulation paradigm as a model of the activity-dependent refinement of structured connectivity during early postnatal development (Thompson et al., 2017).
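As an illustration of how such tuned ('stimulus-specific') input can be assigned, the short sketch below draws, for each stimulus, the subset of excitatory and inhibitory neurons that it drives; a neuron may belong to several such subsets. The network sizes follow the text, but the number of stimuli and the membership probability are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_exc, n_inh = 4000, 1000    # excitatory / inhibitory population sizes from the model
n_stimuli    = 20            # assumed number of stimuli (illustrative)
p_member     = 0.05          # assumed probability that a neuron is driven by a given stimulus

# Boolean membership matrices: entry [s, i] is True if neuron i receives tuned
# feedforward input for stimulus s. Rows are not exclusive, so one neuron can be
# driven by multiple stimuli, as in the model.
exc_members = rng.random((n_stimuli, n_exc)) < p_member
inh_members = rng.random((n_stimuli, n_inh)) < p_member
```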

Figure 1. Generation of novelty responses in a recurrent plastic neural network model.

(A) Left: A recurrently connected network of excitatory (E) neurons (blue triangles) and inhibitory (I) neurons (red circles) receiving tuned input. Excitatory neurons tuned to a sample stimulus A are highlighted in dark blue, the inhibitory counterparts in dark red. E-to-E synapses and I-to-E synapses were plastic, and all other synapses were fixed. Right: Schematic of the stimulation protocol. Multiple stimuli (A, B, and C) were presented in a sequence (ABC). Each sequence was repeated n times in a sequence block. In the second-to-last sequence, the last stimulus was replaced by a novel stimulus (N). Multiple sequence blocks followed each other without interruption, with each block containing sequences of different stimuli. (B) Population average firing rate of all excitatory neurons as a function of time after the onset of a sequence block. Activity was averaged (solid line) across multiple non-repeated sequence blocks (transparent lines: individual blocks). A novel stimulus (dark gray) was presented as the last stimulus of the second-to-last sequence. (C) Spiking activity in response to a sequence (ABC) in a subset of 1000 excitatory neurons where the neurons were sorted according to the stimulus from which they receive tuned input. A neuron can receive input from multiple stimuli and can appear more than once in this raster plot. (D) A random unsorted subset of 50 excitatory neurons from panel C. Time was locked to the sequence block onset.

Figure 1.

Figure 1—figure supplement 1. Excitatory and inhibitory synaptic plasticity functions for different pairing frequencies.

Figure 1—figure supplement 1.

Synaptic weight change as a function of the time between pre- and postsynaptic spikes after the induction of 60 spike pairs at different pairing frequencies (0.1, 10, 20, and 50 Hz). Left: Triplet spike-timing-dependent plasticity rule of excitatory-to-excitatory connections ΔJEE. Right: Inhibitory spike-timing-dependent plasticity rule of inhibitory-to-excitatory connections ΔJEI.
Figure 1—figure supplement 2. Strong connections form between excitatory and excitatory, as well as inhibitory and excitatory neuron groups that are tuned to the same stimulus.

Figure 1—figure supplement 2.

(A) A recurrently connected network of excitatory (E) neurons (blue triangles) and inhibitory (I) neurons (red circles) receives tuned input. Excitatory neurons tuned to a sample stimulus A are highlighted in dark blue, the inhibitory counterparts in dark red. Left: Average excitatory weights within stimulus-specific assembly A are determined by averaging all E-to-E weights within an assembly. Center: Average inhibitory weights onto stimulus-specific assembly A are determined by averaging the weights from all inhibitory neurons onto stimulus-specific assembly A. Right: Stimulus-specific inhibitory weights onto stimulus-specific assembly A are determined by averaging only the weights from the inhibitory neurons that are also tuned to stimulus A (dark red). (B) Evolution of the average excitatory weights corresponding to all repeated and 10 sample novel stimuli. Colored traces mark three stimulus-specific assemblies in sequence 1: A, B, and C. The legend is shared with panel D. (C) Left: Weight matrix of average excitatory (E–to–E) weights after pretraining (see panel B) for all repeated and 10 sample novel stimuli separated by the gray lines. The size of assemblies can be slightly different. The weights across assemblies (off-diagonal) are small compared to the weights within assemblies (diagonal). After pretraining, there is no apparent difference in connectivity for novel and repeated stimuli. The diagonal elements correspond to the traces plotted in B, that is, the time evolution of the bottom-left square, highlighted in dark blue, corresponds to the dark blue trace in panel B. Right: Same as left after the repeated sequence stimulation paradigm (see final in panel B). Here, assembly weights for repeated stimuli are stronger. (D) Same as panel B for the average inhibitory weights. Arrows indicate end points of the pretraining and whole stimulation phase. (E) Left: Weight matrix of stimulus-specific inhibitory (I–to–E) weights after pretraining (see panel D) for all repeated and 10 sample novel stimuli separated by the gray lines. The size of assemblies can be slightly different. Inhibition is stimulus-specific (diagonal stronger than off-diagonal), that is, the weights from inhibitory neurons tuned to a given stimulus onto excitatory neurons that are tuned to the same stimulus (diagonal) are larger than those onto excitatory neurons that are tuned to different stimuli (off-diagonal). After pretraining, there is no apparent difference in connectivity for novel and repeated stimuli. Here, an entire column average approximately corresponds to the traces plotted in D, that is, the time evolution of the averaged first column, highlighted in dark blue, corresponds to the dark blue trace in panel D. It is only approximate, since individual inhibitory neurons can be tuned to multiple repeated and novel stimuli and hence contribute to multiple averages shown here in the matrix. Right: Same as left after the repeated sequence stimulation paradigm (see final in panel D). Here, excitatory assemblies tuned to repeated stimuli receive more inhibition than those tuned to novel stimuli.
Figure 1—figure supplement 3. Different stimuli in the pretraining and stimulation phases generate similar synaptic weight and firing rate dynamics.

Figure 1—figure supplement 3.

(A) Schematic of increased inhibitory weights onto stimulus-specific assemblies upon the repeated presentation of stimuli A and B (indicated by dark blue and turquoise) relative to neurons from other assemblies (light blue). Both excitatory and inhibitory neurons were pretuned to different stimulus features (black and gray borders). (B) Population average firing rate of all excitatory neurons as a function of time after the onset of a sequence block. Activity was averaged (solid line) across multiple non-repeated sequence blocks (transparent lines: individual blocks). A novel stimulus was presented as the last stimulus of the second-to-last sequence. (C) Top: Evolution of the average excitatory weights corresponding to all repeated and 10 sample novel stimuli. Colored traces mark three stimulus-specific assemblies in sequence 1: A, B, and C. The legend is shared with the bottom panel. Bottom: Same as top panel for the average inhibitory weights. Arrows indicate the end points of the pretraining and the whole stimulation phase, as well as the early, intermediate, and late time points. (D) Top, left: Population average firing rate of all excitatory neurons during the repeated presentation of sequence 1 at an early time point (see panel C, bottom). The novelty response can be seen at the end of the stimulation period. Bottom, left: Close-up of panel C, bottom (rectangle). Top, right: Same as the top, left panel but at intermediate and late time points (see panel C, bottom). Bottom, right: Corresponding dynamics of the average inhibitory weights onto all three stimulus-specific assemblies from sequence 1 at early, intermediate and late time points (see panel C, bottom). The dark purple trace (early) corresponds to the average of the three colored traces in the bottom, left panel. Time is locked to sequence onset. (E) Left: Weight matrix of average excitatory (E–to–E) weights after pretraining (see panel C), sorted by repeated and novel stimuli (repeated and novel stimuli are separated by the black lines). Right: Same as left after the repeated sequence stimulation paradigm (see final in panel C, top). The diagonal elements correspond to the traces plotted in C, top. (F) Left: Weight matrix of stimulus-specific inhibitory (I–to–E) weights after pretraining (see panel C, bottom), sorted by repeated and novel stimuli (repeated and novel stimuli are separated by the black lines). Right: Same as left after the repeated sequence stimulation paradigm (see final in panel C, bottom).
Figure 1—figure supplement 4. Quantifying response density in the unique sequence stimulation paradigm.

Figure 1—figure supplement 4.

Single neuron statistics measured for 100 ms directly after the onset of the stimulus (onset), after the onset of the novel stimulus (novelty) and shortly before the novel stimulus is presented (adapted). (A) Fraction of active excitatory neurons (at least one spike in a 100 ms window) for the three different time-points of the stimulation paradigm. Horizontal line indicates the median, boxes are drawn between the 25th and 75th percentile, whiskers extend above and below the box to the most extreme data points that are within a distance to the box equal to 1.5 times the interquartile range and points indicate all data points. Each data point corresponds to a unique sequence block. (B) Same as A for inhibitory neurons. (C) Spike raster of all neurons sorted according to neuron ID during one sequence block. Time is locked to sequence block onset.
Figure 1—figure supplement 5. Normalization time step Δt does not affect the occurrence of a novelty response.

Figure 1—figure supplement 5.

(A) Population average firing rate of all excitatory neurons as a function of time after the onset of a sequence block for different normalization time steps Δt=[0.02s,0.5s,1s,2s,10s,50s]. Activity was averaged (solid line) across multiple non-repeated sequence blocks (transparent lines: individual blocks). A novel stimulus was presented as the last stimulus of the second-to-last sequence. (B) Left: Evolution of the average excitatory weights corresponding to 10 repeated stimuli with Δt=50s. Arrows indicate the time-point of normalization. Colored traces mark three stimulus-specific assemblies in sequence 1: A, B, and C. Right: Same as left panel for the average inhibitory weights.

To test the influence of inhibitory plasticity on the emergence of a novelty response, we followed an experimental paradigm used to study novelty responses in layer 2/3 (L2/3) of mouse primary visual cortex (V1) (Homann et al., 2017). In Homann et al., 2017, a single stimulus consisted of 100 randomly oriented Gabor patches. Three different stimuli (A, B, and C) were presented in a sequence (ABC) (Figure 1A, right). The same sequence (ABC) was then repeated several times in a sequence block. In the second-to-last sequence, the last stimulus was replaced by a novel stimulus (N). In the subsequent sequence block, a new sequence with different stimuli was presented (we refer to this as a unique sequence stimulation paradigm). The novel stimuli were also different for each sequence block. In this paradigm, we observed elevated population activity in the excitatory model population at the beginning of each sequence block (‘onset response’) and a steady reduction to a baseline activity level for the repeated sequence presentation (Figure 1B). Upon presenting a novel stimulus, the excitatory population showed excess activity, clearly discernible from baseline, called the ‘novelty response’. This novelty response was comparable in strength to the onset response. Sorting spike rasters according to sequence stimuli revealed that stimulation led to high firing rates in the neurons that are selective to the presented stimulus (A, B, or C) (Figure 1C). When we used a different set of stimuli in the stimulation versus the pretraining phase to better match the randomly oriented Gabor patches presented in Homann et al., 2017 (Figure 1—figure supplement 3A, see Materials and methods), we found the same type of responses to repeated and novel stimuli (Figure 1—figure supplement 3B). When examining a random subset of neurons, we found general response sparseness and periodicity during sequence repetitions (Figure 1D), very similar to experimental findings (Homann et al., 2017). More concretely, sparse population activity for repeated stimuli in our model network was the result of each stimulus presentation activating a subset of excitatory neurons in the network, which were balanced by strong inhibitory feedback. Therefore, only neurons that directly received this feedforward drive were highly active, while most other neurons in the network remained largely silent. Periodicity in the activity of single neurons resulted from the repetition of a sequence.
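For concreteness, one sequence block of this paradigm can be assembled as sketched below: the sequence is repeated n times, and the last stimulus of the second-to-last repetition is replaced by the novel stimulus. The function name and stimulus labels are hypothetical; only the block structure follows the paradigm described above.

```python
def build_sequence_block(seq_stimuli, novel_stimulus, n_repeats):
    """Ordered list of stimuli in one sequence block of the paradigm."""
    block = []
    for rep in range(n_repeats):
        seq = list(seq_stimuli)
        if rep == n_repeats - 2:        # second-to-last repetition of the sequence
            seq[-1] = novel_stimulus    # replace its last stimulus with the novel one
        block.extend(seq)
    return block

# Example: build_sequence_block(['A', 'B', 'C'], 'N', 5)
# -> ['A','B','C', 'A','B','C', 'A','B','C', 'A','B','N', 'A','B','C']
```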

In the model, the fraction of active excitatory neurons was qualitatively similar for novel, adapted and onset stimuli (Figure 1—figure supplement 4). The relatively sparse novelty response in our model was the result of increased inhibition onto all excitatory neurons in the network, with activity remaining mainly in the neurons tuned to the novel stimulus. In contrast, Homann et al., 2017 found that a large fraction of neurons respond to a novel stimulus, suggesting a dense novelty response. Since the increase in inhibition seems to be responsible for the absence of a dense novelty response in our model, in a later section we suggest disinhibition as a mechanism to achieve the experimentally observed dense novelty responses in our model.

Our results suggest that presenting repeated stimuli (and repeated sequences of stimuli) to a plastic recurrent network with tuned excitatory and inhibitory neurons readily leads to a reduction of the excitatory averaged population response, consistent with the observed adaptation in multiple experimental studies in various animal models and brain regions (Ulanovsky et al., 2003; Hamm and Yuste, 2016; Homann et al., 2017). Importantly, the model network generates a novelty response when presenting a novel stimulus by increasing the excitatory population firing rate at the time of stimulus presentation (Näätänen et al., 2007).

The dynamics of novelty and onset responses depend on sequence properties

To explore the dynamics of novelty responses, we probed the model network with a modified stimulation paradigm. Rather than fixing the number of sequence repetitions in one sequence block (Figure 1A, right), here we presented a random number of sequence repetitions (nine values between 4 and 45 repetitions) for each sequence block. This allowed us to measure the novelty and onset responses as a function of the number of sequence repetitions. Novelty and onset responses were observed after as few as four sequence repetitions (Figure 2A). After more than 15 sequence repetitions, the averaged excitatory population activity reached a clear baseline activity level (Figure 2A). The novelty response amplitude, measured by the population rate of the novelty peak minus the baseline population rate, increased with the number of sequence repetitions before saturating for a high number of sequence repeats (Figure 2B, black dots). The onset response amplitude after the respective sequence block followed the same trend (Figure 2B, gray dots). Next, we varied the number of stimuli in a sequence, resulting in different sequence lengths across blocks (3 to 15 stimuli per sequence). By averaging excitatory population responses across sequence blocks with equal length, we found that the decay of the onset response depends on the number of stimuli in a sequence (Figure 2C). Upon fitting an exponentially decaying function to the activity of the onset response, we derived a linear relationship between the number of stimuli in a sequence and the decay constant (Figure 2D).
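The onset decay described here can be quantified with a standard exponential fit, as in the sketch below (using scipy); the paper's exact fitting procedure may differ, and the initial-guess heuristic is an assumption.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, amplitude, tau, baseline):
    """Exponentially decaying onset response approaching a baseline rate."""
    return amplitude * np.exp(-t / tau) + baseline

def fit_onset_decay(times, pop_rate):
    """Fit the excitatory population rate after sequence-block onset and return tau.

    times    : time (or repetition index) since block onset
    pop_rate : population firing rate at those times
    """
    p0 = [pop_rate[0] - pop_rate[-1], (times[-1] - times[0]) / 3.0, pop_rate[-1]]  # rough guess
    (amplitude, tau, baseline), _ = curve_fit(exp_decay, times, pop_rate, p0=p0)
    return tau

# The decay constants obtained for different sequence lengths can then be fit with a
# line, e.g. np.polyfit(sequence_lengths, taus, 1), whose slope corresponds to m in Figure 2D.
```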

Figure 2. Dependence of the novelty response on the number of sequence repetitions and the sequence length.

Figure 2.

(A) Population average firing rate of all excitatory neurons for a different number of sequence repetitions within a sequence block. Time is locked to the sequence block onset. (B) The response amplitude of the onset (gray) and the novelty (black) response as a function of sequence repetitions fit with an exponential with a time constant τ. (C) Population average firing rate of all excitatory neurons for varying sequence length fit with an exponential function (red). Time is locked to the sequence block onset. (D) The onset decay time constant (fit with an exponential, as shown in panel C) as a function of sequence length. The simulated data was fit with a linear function with slope m. (B, D) Error bars correspond to the standard deviation across five simulated instances of the model.

In summary, we found that novelty responses arise for different sequence variations. Our model network suggests that certain features of the novelty response depend on the properties of the presented sequences. Changing the number of sequence repetitions modifies the onset and novelty response amplitude (Figure 2A,B), while a longer sequence length leads to a longer adaptation time constant (Figure 2C,D). Interestingly, both findings are in good qualitative agreement with experimental data that presented similar sequence variations (Homann et al., 2017). An exponential fit of the experimental data found a time constant of τ=3.2±0.7 repetitions when the number of sequence repetitions was varied (Homann et al., 2017). The time constant in our model network was somewhat longer (τ=9±1 repetitions), but of a similar order of magnitude (Figure 2B). Similarly, our model network produced a linear relationship between the adaptation time constant and sequence length with a slope of m=1.6±0.04 (Figure 2D), very close to the slope extracted from the data (m=2.1±0.3) (Homann et al., 2017). Therefore, grounded in biologically plausible plasticity mechanisms, and capable of capturing the emergence and dynamics of novelty responses, our model network provides a suitable framework for a mechanistic dissection of the circuit contributions to the generation of a novelty response.

Stimulus periodicity in the sequence is not required for the generation of a novelty response

Experimental studies have often reported novelty or deviant responses by averaging across several trials due to poor signal-to-noise ratios of the measured physiological activity (Homann et al., 2017; Vinken et al., 2017). Therefore, we investigated the network response to paradigms with repeated individual sequence blocks (Figure 3A), which we refer to as the repeated sequence stimulation paradigm. We randomized the order of the sequence block presentation to avoid additional temporal structure beyond the stimulus composition of the sequences. Repeating sequence blocks dampened the onset response at sequence onset compared to the unique sequence stimulation paradigm (compare Figure 1B and Figure 2A,B with Figure 3A). Next, we wondered whether the excitatory and inhibitory population responses to repeated and novel stimuli are related. We found that both excitatory and inhibitory populations adapt to the repeated stimuli and show a prominent novelty peak that is larger than the respective averaged onset response (Figure 3B,C). Based on these findings, we make the following predictions for future experiments: (1) A novelty response is detectable in both the excitatory and inhibitory populations. (2) The sequence onset response is dampened for multiple presentations of the same sequence block compared to the presentation of unique sequence blocks.

Figure 3. Stimulus periodicity in the sequence is not required for the generation of a novelty response.

Figure 3.

(A–F) Population average firing rate of all excitatory neurons (and all inhibitory neurons in B,C) during the presentation of five different repeated sequence blocks. The population firing rate was averaged across ten repetitions of each sequence block. Time is locked to sequence block onset. (A) A novel stimulus was presented as the last stimulus of the second-to-last sequence. (B) Same as panel A but for both excitatory and inhibitory populations (transparent lines: individual sequence averages). (C) Comparison of baseline, novelty, and onset response for inhibitory and excitatory populations. Error bars correspond to the standard deviation across the five sequence block averages shown in B. (D) In the second-to-last sequence, the last and second-to-last stimulus were swapped instead of presenting a novel stimulus. (E) Within a sequence, stimuli were shuffled in a pseudo-random manner where a stimulus could not be presented twice in a row. A novel stimulus was presented as the last stimulus of the second-to-last sequence. (F) A novel stimulus was presented as the last stimulus of the second-to-last sequence. Each sequence had a different feedforward input drive for the novel stimulus, indicated by the percentage of the typical input drive for the novel stimulus used before.

Next, we investigated whether the generation of novelty responses observed in the model network depends on the sequence structure. If the novelty responses were to truly signal the violation of the sequence structure or the stimulus predictability in a sequence, we would expect a novelty response to occur if two stimuli in a sequence were swapped, that is, ACB instead of ABC. We found that swapping the last and second-to-last stimulus, instead of presenting a novel stimulus, does not elicit a novelty response (Figure 3D). Additionally, we asked whether the periodicity of the stimuli within a sequence influences the novelty response. Shuffling the stimuli within a sequence block still generates a novelty response and adaptation to the repeated stimuli, similar to the strictly periodic case (Figure 3E, compare to Figure 3A). Finally, we investigated if the novelty peak depends on the input firing rate of the novel stimulus. We found that a reduction of the input drive decreases the novelty peak, revealing a monotonic dependence of the novelty response on stimulus strength (Figure 3F). Based on these results, we make two additional predictions: (3) The periodicity of stimuli in the sequence is not required for the generation of a novelty response. Hence, the novelty response encodes the distribution of presented stimuli, rather than the structure of a sequence. (4) A novelty response depends on the strength of the novel stimulus.
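The constrained shuffling used here (no stimulus shown twice in a row, including across the boundary between consecutive sequence presentations) can be implemented by simple rejection sampling, as sketched below; the function name is illustrative.

```python
import random

def shuffle_no_immediate_repeat(stimuli, previous_last=None, rng=random):
    """Pseudo-random order of the stimuli in which no stimulus appears twice in a row,
    also respecting the last stimulus of the previous presentation (previous_last)."""
    while True:
        order = list(stimuli)
        rng.shuffle(order)
        neighbors = zip([previous_last] + order[:-1], order)
        if all(a != b for a, b in neighbors):
            return order
```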

Increased inhibition onto highly active neurons leads to adaptation

To gain an intuitive understanding of the sensitivity of novelty responses to stimulus identity, but their lack of sensitivity to stimulus periodicity in the sequence, we examined more closely the role of inhibitory plasticity as the leading mechanism behind the novelty responses in our model. We found that novelty responses arise because inhibitory plasticity fails to sufficiently increase inhibitory input and to counteract the excess excitatory input into excitatory neurons upon the presentation of a novel stimulus. In short, novelty responses can be understood as the absence of adaptation in an otherwise adapted response. Adaptation in the network arises through increased inhibition onto highly active neurons via the selective strengthening of I-to-E weights (Figure 4A).
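This negative-feedback intuition can be made explicit under a standard simplification: assuming a Vogels-et-al.-2011-style iSTDP rule and uncorrelated Poisson pre- and postsynaptic spiking (an idealization, not the full model), the expected drift of an I-to-E weight is

```latex
\left\langle \frac{dJ_{EI}}{dt} \right\rangle
  = \eta\, r_{\mathrm{pre}} \left( 2\,\tau_{\mathrm{iSTDP}}\, r_{\mathrm{post}} - \alpha \right)
  = 2\,\eta\,\tau_{\mathrm{iSTDP}}\, r_{\mathrm{pre}} \left( r_{\mathrm{post}} - \rho_{0} \right),
  \qquad \alpha = 2\,\rho_{0}\,\tau_{\mathrm{iSTDP}} .
```

Inhibition onto excitatory neurons firing above the target rate (those driven by a repeated stimulus) therefore potentiates, while inhibition onto neurons firing at or below it (those tuned to a stimulus that has not yet been presented) depresses or stays low.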

Figure 4. Inhibition onto neurons tuned to repeated stimuli increases during sequence repetitions.

(A) Schematic of increased inhibitory weights onto two stimulus-specific assemblies upon the repeated presentation of stimuli A and B (indicated in dark blue and turquoise) relative to neurons from other assemblies (light blue). (B) Evolution of the average inhibitory weights onto stimulus-specific assemblies. Colored traces mark three stimulus-specific assemblies in sequence 1: A, B, and C. Arrows indicate time points of early, intermediate, and late sequence block presentation shown in C and D. (C) Top: Population average firing rate of all excitatory neurons during the repeated presentation of sequence 1 at an early time point (see panel B). Time is locked to sequence onset. Bottom: Close-up of panel B (rectangle). Time is locked to sequence onset. (D) Top: Same as panel C (top) but at intermediate and late time points (see panel B). Bottom: Corresponding dynamics of the average inhibitory weights onto all three stimulus-specific assemblies from sequence 1 at early, intermediate and late time points (see panel B). The dark purple trace (early) corresponds to the average of the three colored traces in C (bottom).

Figure 4.

Figure 4—figure supplement 1. Pretraining parameters do not qualitatively influence the novelty response.

Figure 4—figure supplement 1.

(A) The novelty peak height as a function of the number of repetitions. The same 65 stimuli were presented during the pretraining and the subsequent stimulation paradigm (paradigm stimuli). (B) Evolution of the average inhibitory weights onto all stimulus-specific assemblies of repeated (full line) or novel (dashed line) stimuli. Shades of gray represent the average inhibitory weights for different numbers of repetitions of each stimulus. (C) The novelty peak height as a function of the total number of stimuli in the pretraining phase. Each of the 65 paradigm stimuli and n (0 to 100) additional stimuli are repeated five times during pretraining. (D) Same as panel B, where now shades of gray represent the average inhibitory weights for different numbers of stimuli.
Figure 4—figure supplement 2. Fast inhibitory plasticity is key for the generation of a novelty response.

Figure 4—figure supplement 2.

(A) Population average firing rate of all excitatory neurons as a function of time after the onset of a sequence block. Activity was averaged (solid line) across multiple non-repeated sequence blocks (transparent lines: individual blocks). A novel stimulus was presented as the last stimulus of the second-to-last sequence. Each panel shows the population average for a different inhibitory learning rate η. For reference, we use η=1 pF in the remainder of the manuscript. (B) The response amplitude of the novelty response as a function of the inhibitory learning rate η. (C) The onset decay time constant (fit with an exponential) as a function of the inhibitory learning rate η. Error bars correspond to the standard deviation; dots are results from a single run.

To determine how inhibitory plasticity drives the generation of novelty responses or, equivalently, adaptation in our model, we studied the evolution of inhibitory weights. The inhibitory weights onto stimulus-specific assemblies tuned to the stimuli in a given sequence increased upon presentation of the corresponding sequence block, and decreased otherwise (Figure 4B). The population firing rate during repeated presentation of a sequence decreased (adapted) on the same timescale as the increase of the inhibitory weights related to this sequence (Figure 4C). When a stimulus was presented to the network for the first time, the total excitatory input to the corresponding excitatory neurons was initially not balanced by inhibition. Hence, the neurons within the assembly tuned to that stimulus exhibited elevated activity at sequence onset, leading to what we called the ‘onset response’ (Figure 1B). The same was true for the novelty responses as reflected in low inhibitory weights onto novelty assemblies relative to repeated assemblies (Figure 1—figure supplement 2D,E). Consequently, the generation of a novelty response did not depend on the specific periodicity of the stimuli within a sequence (Figure 3). Swapping two stimuli did not generate a novelty response since the corresponding assemblies of each stimulus were already in an adapted state. Therefore, our results suggest that the exact sequence structure of stimulus presentations is not relevant for the novelty response, as long as the overall distribution of stimuli is maintained.

Interestingly, we found that adaptation occurs on multiple timescales in our model. The fastest is the millisecond timescale on which inhibitory plasticity operates; next is the timescale of seconds corresponding to the presentation of a sequence block; and the slowest is the timescale of minutes corresponding to the presentation of the same sequence block multiple times (Figure 4D, top; also compare Figure 1B and Figure 3A). The slowest decrease in the population firing rate was the result of long-lasting changes in the average inhibitory weights onto the excitatory neurons tuned to the stimuli within a given sequence. Hence, the average inhibitory weight for a given sequence increased with the number of previous sequence block presentations of that sequence (Figure 4D, bottom).

Using a different set of stimuli in the stimulation versus the pretraining phase to match the randomly oriented Gabor patches presented in Homann et al., 2017, led to qualitatively similar firing rate and synaptic weight dynamics (Figure 1—figure supplement 3C,D, see also Materials and methods). Differences in the mean inhibitory weights onto different stimulus-specific assemblies in a given sequence were due to random initial differences in assembly size and connection strength (Figure 4B,C, see Materials and methods). Differences in early, intermediate, and late inhibitory weight changes, however, were consistent across different experiments and model instantiations (Figure 4D, Figure 1—figure supplement 3D, right).

Furthermore, we observed that the dynamics of inhibitory plasticity and the generation of a novelty response did not depend on the exact parameters of the pretraining phase (Figure 4—figure supplement 1). Specifically, increasing the number of repetitions in the pretraining phase increased the height of the novelty peak, but eventually reached a plateau at 10 repetitions (Figure 4—figure supplement 1A). Increasing the number of stimuli decreased the height of the novelty peak (Figure 4—figure supplement 1C). However, while these pretraining parameters affected some aspects of the novelty response, they did not prevent its generation. Even without a pretraining phase (zero repetitions), a novelty response could be generated.

Based on our result that inhibitory plasticity is the underlying mechanism of adapted and novelty responses in our model, we wondered how fast it needs to be. Hence, we tested the influence of the inhibitory learning rate (η) in the unique sequence stimulation paradigm. We found that inhibitory plasticity needs to be fast for both the generation of a novelty response (Figure 4—figure supplement 2A,B) and adaptation to repeated stimuli (Figure 4—figure supplement 2C). Whether such fast inhibitory plasticity operates in the sensory cortex to underlie the adapted and novelty responses is still unknown.

In summary, we identified the plasticity of connections from inhibitory to excitatory neurons belonging to a stimulus-specific assembly as the key mechanism in our framework for the generation of novelty responses and for the resulting adaptation of the network response to repeated stimuli. This adaptation occurs on multiple timescales, covering the range from the timescale of inhibitory plasticity (milliseconds) to sequence block adaptation (seconds) to the presentation of multiple sequence blocks (minutes).

The adapted response depends on the interval between stimulus presentations

Responses to repeated stimuli do not stay adapted but can recover if the repeated stimulus is no longer presented (Ulanovsky et al., 2004; Cohen-Kashi Malina et al., 2013). We investigated the recovery of adapted responses in the unique sequence stimulation paradigm (Figure 5A). Similar to Figure 2C, we changed the number of stimuli in the sequence, which leads to different inter-repetition intervals of a repeated sequence stimulus (the interval until the same stimulus is presented again). For example, if two repeated stimuli (A, B) are presented, the inter-repetition interval for each stimulus is 300 ms because each stimulus is presented for 300 ms (Figure 5C). If four repeated stimuli are presented (A, B, C, D), the inter-repetition interval for each stimulus is 900 ms. We defined the adaptation level as the difference of the onset population rate, measured at the onset of the stimulation, and the baseline rate, measured shortly before the presentation of a novel stimulus. We found that an increase in the inter-repetition interval reduced the adaptation level of the excitatory population (Figure 5A,D) due to a decrease of inhibitory synaptic strength onto stimulus-specific assemblies (Figure 5B,E). More specifically, the population average of all excitatory neurons tuned to stimulus A was high when stimulus A was presented and low when stimulus B was presented (Figure 5C). Hence, inhibitory weights onto stimulus-specific assembly A increased while A was presented and decreased otherwise (Figure 5B).
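With each stimulus presented for 300 ms and stimuli following each other without gaps within a sequence, the inter-repetition interval grows linearly with the number of stimuli in the sequence,

```latex
\Delta t_{\mathrm{rep}} = (n_{\mathrm{stim}} - 1) \times 300\,\mathrm{ms},
```

giving 300 ms for a two-stimulus sequence and 900 ms for a four-stimulus sequence, as in the examples above.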

Figure 5. Longer inter-repetition intervals decrease the level of adaptation due to the recovery of inhibitory synaptic weights.

Figure 5.

(A) Population average firing rate of all excitatory neurons in the unique sequence stimulation paradigm for varying inter-repetition intervals (varying sequence length). Time is locked to the sequence block onset. (B) Evolution of the average inhibitory weights onto stimulus-specific assembly A (identical in all runs) for varying inter-repetition intervals. Time is locked to the sequence block onset. (C) Population average firing rate of stimulated excitatory neurons for a 300 ms inter-repetition interval. Time is locked to the sequence block onset. One step in the schematic corresponds to one stimulus in a presented sequence. (D) Difference of the onset population rate (measured at the onset of the stimulation, averaged across runs) and the baseline rate (measured before novelty response) as a function of the inter-repetition interval. (E) Absolute change of inhibitory weights onto stimulus-specific assembly A from the start until the end of a sequence block presentation as a function of inter-repetition interval.

In summary, longer inter-repetition intervals provide more time for the inhibitory weights onto stimulus-specific assemblies to decrease, hence, weakening the adaptation.

Inhibitory plasticity and tuned inhibitory neurons support stimulus-specific adaptation

Next, we investigated whether inhibitory plasticity of tuned inhibitory neurons supports additional computational capabilities beyond the generation of novelty responses and adaptation of responses to repeated stimuli on multiple timescales. To this end, we implemented a different stimulation paradigm to investigate the phenomenon of stimulus-specific adaptation (SSA). At the single-cell level, SSA typically involves a so-called oddball paradigm where two stimuli elicit an equally strong response when presented in isolation, but when one is presented more frequently, the elicited response is weaker than for a rarely presented stimulus (Natan et al., 2015).

We implemented a similar paradigm at the network level where the excitatory neurons corresponding to two stimuli A and B were completely overlapping and the inhibitory neurons were partially overlapping (Figure 6A). Upon presenting stimulus A several times, the neuronal response gradually adapted to the baseline level of activity, while presenting the oddball stimulus B resulted in an increased population response (Figure 6B). Therefore, this network was able to generate SSA. Even though stimuli A and B targeted the same excitatory cells, the network response adapted only to stimulus A, while generating a novelty response for stimulus B. Even after presenting stimulus B, activating stimulus A again preserved the adapted response (Figure 6B). This form of SSA exhibited by our model network is in agreement with many experimental findings in the primary auditory cortex, primary visual cortex, and multiple other brain areas and animal models (Nelken, 2014). In our model network, SSA could neither be generated with adaptive neurons and static synapses (Figure 6C, top; Materials and methods), nor with inhibitory plasticity without inhibitory tuning (Figure 6C, bottom). In fact, including an adaptive current in the model neurons (Brette and Gerstner, 2005) did not even lead to adaptation of the response to a frequent stimulus since firing rates rapidly adapted during stimulus presentation and completely recovered in the inter-stimulus pause (Figure 6C, top).
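The network-level oddball paradigm used here can be written down compactly, as in the sketch below. The 300 ms stimulus duration and inter-stimulus pause are illustrative placeholders (the exact timing is specified in Materials and methods), and the function name is hypothetical.

```python
def build_oddball_train(frequent='A', deviant='B', n_presentations=20,
                        stim_duration=0.3, inter_stim_pause=0.3):
    """List of (stimulus, onset_time) pairs: the frequent stimulus is shown
    n_presentations times, with the deviant replacing it in the second-to-last slot."""
    train = []
    for i in range(n_presentations):
        stim = deviant if i == n_presentations - 2 else frequent
        train.append((stim, i * (stim_duration + inter_stim_pause)))
    return train
```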

Figure 6. Stimulus-specific adaptation follows from inhibitory plasticity and tuning of both excitatory and inhibitory neurons.

(A) Stimuli A and B provided input to the same excitatory neurons (dark blue and turquoise). Some neurons in the inhibitory population were driven by both A and B (dark red and rose) and some by only one of the two stimuli (dark red or rose). (B,C) Population average firing rate of excitatory neurons over time while stimulus A was presented 20 times. Stimulus B was presented instead of A as the second-to-last stimulus. Time is locked to stimulation onset. (B) Top: Population average of all excitatory neurons in the network with inhibitory plasticity (iSTDP) and inhibitory tuning. Bottom: Population average of stimulated excitatory neurons only (stimulus-specific to A and B). (C) Top: Same as panel B (top) for neurons with an adaptive current in a non-plastic recurrent network. Bottom: Same as panel B (top) for the network with inhibitory plasticity (iSTDP) and no inhibitory tuning. (D) Weight evolution of stimulus-specific inhibitory weights corresponding to stimuli A and B and average inhibitory weights.

Figure 6.

Figure 6—figure supplement 1. Recovery of adapted responses in the SSA paradigm.

Figure 6—figure supplement 1.

The stimulus is shown nine times while the population response adapts. This is followed by a long pause, and then the same stimulus is shown once more. The inter-stimulus interval here is 900 ms, while stimuli are presented for 300 ms. (A) Population average of all excitatory neurons during repeated presentation of stimulus A followed by a short pause (9 s) and then another presentation of A (top). Weight evolution of stimulus-specific inhibitory weights during the stimulation paradigm (bottom). Time is locked to stimulus onset. (B) Same as A but with a much longer pause (225 s). The initial stimulation paradigm in panel A is the same as in panel B (indicated with the box).

We investigated the dynamics of inhibitory weights to understand the mechanism behind SSA in our model network. During the presentation of stimulus A, stimulus-specific inhibitory weights corresponding to stimulus A (average weights from inhibitory neurons tuned to stimulus A onto excitatory neurons tuned to stimulus A, see Figure 1—figure supplement 2A, right) increased their strength, while stimulus-specific inhibitory weights corresponding to stimulus B remained low (Figure 6D). Hence, upon presenting the oddball stimulus B, the stimulus-specific inhibitory weights corresponding to stimulus B remained sufficiently weak to keep the firing rate of excitatory neurons high, thus resulting in a novelty response.

We next asked about the recovery of the adapted response in this SSA paradigm (Figure 6—figure supplement 1). After a 9 s pause, the response remained adapted (Figure 6—figure supplement 1A). Only after more than 200 s did the response fully recover (Figure 6—figure supplement 1B). In contrast to the results in Figure 5, here, the adaptation level remained high due to the absence of network activity between stimulus presentations. The response slowly recovered from adaptation as the time between stimulus presentations increased.

In summary, our results suggest that the combination of inhibitory plasticity and inhibitory tuning can give rise to SSA. Previous work has argued that inhibition or inhibitory plasticity does not allow for SSA (Nelken, 2014). However, this is only true if inhibition is interpreted as a ‘blanket’ without any tuning in the inhibitory population. Once recent experimental evidence for tuned inhibition (Lee et al., 2014; Xue et al., 2014; Znamenskiy et al., 2018) is included, the model can indeed capture the emergence of SSA.

Disinhibition leads to novelty response amplification and a dense population response

Beyond the bottom-up computations captured by the network response to the different stimuli, we next explored the effect of additional modulations or top-down feedback into our network model. Top-down feedback has been frequently postulated to signal the detection of an error or irregularity in the framework of predictive coding (Clark, 2013; Spratling, 2017). Therefore, we specifically tested the effect of disinhibitory signals on sequence violations by inhibiting the population of inhibitory neurons during the presentation of a novel stimulus (Figure 7A). Recent evidence has identified a differential disinhibitory effect in sensory cortex in the context of adapted and novelty responses (Natan et al., 2015). However, due to the scarcity of detailed knowledge about higher order feedback signals or within-layer modulations in this context, we did not directly model the source of disinhibition.

Figure 7. Disinhibition leads to a novelty response amplification and a dense population response.

Figure 7.

(A) Stimuli A and B provided input to the same excitatory neurons (dark blue and turquoise). Some neurons in the inhibitory population were driven by both A and B (dark red and rose) and some by only one of the two stimuli (dark red or rose). Inhibition (light green) of the entire inhibitory population led to disinhibition of the excitatory population. (B) Population average firing rate of all excitatory neurons over time while stimulus A is presented 20 times. Stimulus B was presented instead of A as the second-to-last stimulus. During the presentation of B, the inhibitory population was inhibited. Time is locked to stimulation onset. (C) Left: Raster plot of 250 excitatory neurons corresponding to the population average shown in panel B. The 50 neurons in the bottom part of the raster plot were tuned to stimuli A and B. Time is locked to stimulation onset. Right: Fraction of active excitatory neurons (at least one spike in a 100 ms window) measured directly after the onset of a stimulus. The raster plot and the fraction of active excitatory neurons are shown for the presentation of stimulus B (with disinhibition) and the preceding presentation of stimulus A (standard). (D) Population average peak height during disinhibition and the presentation of stimulus B, as a function of the disinhibition strength. Arrow indicates the population average peak height of the trace shown in panel B. Results are shown for five simulations. (E) Fraction of active excitatory neurons during disinhibition as a function of the disinhibition strength. Arrow indicates the data point corresponding to panel C. Results are shown for five simulations.

When repeating the SSA experiment (Figure 6) and applying such a disinhibitory signal (inhibition of the inhibitory population) at the time of the novel stimulus B, our model network amplified the novelty response (Figure 7B, shaded green, also compare to Figure 6B, top). Disinhibition also increased the density of the network response, that is, the number of active excitatory neurons (Figure 7C, left). Indeed, disinhibition increased the fraction of active excitatory neurons, which we defined as the fraction of neurons that spike at least once in a 100 ms window during the presentation of a stimulus (Figure 7C, right). Dense novelty responses have been recently reported experimentally, where novel stimuli elicited excess activity in a large fraction of the neuronal population in mouse V1 (Homann et al., 2017). Without a disinhibitory signal, the fraction of active neurons for a novel stimulus in our model was qualitatively similar to that for repeated stimuli, and therefore there was no dense novelty response (Figure 1—figure supplement 4A). Given that the inclusion of a disinhibitory signal readily increases the density of the novelty response, we suggest that disinhibition might underlie these experimental findings.
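The density measure used here (the fraction of excitatory neurons emitting at least one spike in the 100 ms window after stimulus onset) can be computed as in the short sketch below; the array-based interface is an assumed convenience and not the paper's analysis code.

```python
import numpy as np

def fraction_active(spike_times, spike_ids, n_neurons, onset, window=0.1):
    """Fraction of neurons that spike at least once in [onset, onset + window)."""
    in_window = (spike_times >= onset) & (spike_times < onset + window)
    active_ids = np.unique(spike_ids[in_window])
    return active_ids.size / n_neurons
```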

In sum, we found that by controlling the total disinhibitory strength (Materials and methods), disinhibition can flexibly amplify the novelty peak (Figure 7D) and increase the density of novelty responses (Figure 7E). Therefore, we propose that disinhibition can be a powerful mechanism to modulate novelty responses in a network of excitatory and inhibitory neurons.

Discussion

We developed a recurrent network model with plastic synapses to unravel the mechanistic underpinning of adaptive phenomena and novelty responses. Using the paradigm of repeated stimulus sequences (Figure 1A, right), our model network captured the adapted, sparse and periodic responses to repeated stimuli (Figure 1B–D) as observed experimentally (Fairhall, 2014; Homann et al., 2017). The model network also exhibited a transient elevated population response to novel stimuli (Figure 1B), which could be modulated by the number of sequence repetitions and the sequence length in the stimulation paradigm (Figure 2), in good qualitative agreement with experimental data (Homann et al., 2017). We proposed inhibitory synaptic plasticity as a key mechanism behind the generation of these novelty responses. In our model, repeated stimulus presentation triggered inhibitory plasticity onto excitatory neurons selective to the repeated stimulus, reducing the response of excitatory neurons and resulting in their adaptation (Figure 4). In contrast, for a novel stimulus inhibitory input onto excitatory neurons tuned to that stimulus remained low, generating the elevated novelty response. Furthermore, we showed that longer inter-repetition intervals led to the recovery of adapted responses (Figure 5).

Based on experimental evidence (Ohki and Reid, 2007; Griffen and Maffei, 2014), we included specific input onto both the excitatory and the inhibitory populations (Figure 1A, left). Such tuned inhibition (as opposed to untuned, ‘blanket’ inhibition commonly used in previous models) enabled the model network to generate SSA (Figure 6). Additionally, in the presence of tuned inhibition, a top-down disinhibitory signal achieved a flexible control of the amplitude and density of novelty responses (Figure 7). Therefore, besides providing a mechanistic explanation for the generation of adapted and novelty responses to repeated and novel sensory stimuli, respectively, our network model enabled us to formulate multiple experimentally testable predictions, as we describe below.

Inhibitory plasticity as an adaptive mechanism

We proposed inhibitory plasticity as the key mechanism that allows for adaptation to repeated stimulus presentation and the generation of novelty responses in our model. Many experimental studies have characterized spike-timing-dependent plasticity (STDP) of synapses from inhibitory onto excitatory neurons (Holmgren and Zilberter, 2001; Woodin et al., 2003; Haas et al., 2006; Maffei et al., 2006; Wang and Maffei, 2014; D'amour and Froemke, 2015; Field et al., 2020). In theoretical studies, network models usually include inhibitory plasticity to dynamically stabilize recurrent network dynamics (Vogels et al., 2011; Litwin-Kumar and Doiron, 2014; Zenke et al., 2015). In line with recent efforts to uncover additional functional roles of inhibitory plasticity beyond the stabilization of firing rates (Hennequin et al., 2017), here, we investigated potential functional consequences of inhibitory plasticity in adaptive phenomena. We were inspired by recent experimental work in the mammalian cortex (Chen et al., 2015; Kato et al., 2015; Natan et al., 2015; Hamm and Yuste, 2016; Natan et al., 2017; Heintz et al., 2020), and simpler systems, such as Aplysia (Fischer et al., 1997; Ramaswami, 2014) and in Drosophila (Das et al., 2011; Glanzman, 2011) along with theoretical reflections (Ramaswami, 2014; Barron et al., 2017), which all point towards a prominent role of inhibition and inhibitory plasticity in the generation of the MMN, SSA, and habituation. For example, Natan and colleagues observed that in the mouse auditory cortex, both PV and SOM interneurons contribute to SSA (Natan et al., 2015), possibly due to inhibitory potentiation (Natan et al., 2017). In the context of habituation, daily passive sound exposure has been found to lead to an upregulation of the activity of inhibitory neurons (Kato et al., 2015). Furthermore, increased activity to a deviant stimulus in the MMN is diminished when inhibitory neurons are suppressed (Hamm and Yuste, 2016).

Most experimental studies on inhibition in adaptive phenomena have not directly implicated inhibitory plasticity as the relevant mechanism. Instead, some studies have suggested that the firing rate of the inhibitory neurons changes, resulting in more inhibitory input onto excitatory cells, effectively leading to adaptation (Kato et al., 2015). In principle, there can be many other reasons why the inhibitory input increases: disinhibitory circuits, modulatory signals driving specific inhibition, or increased synaptic strength of excitatory-to-inhibitory connections, to name a few. However, following experimental evidence (Natan et al., 2017) and supported by our results, the plasticity of inhibitory-to-excitatory connections emerges as a top candidate underlying adaptive phenomena. In our model, adaptation to repeated stimuli and the generation of novelty responses via inhibitory plasticity do not depend on the exact shape of the inhibitory STDP learning rule. It is only important that inhibitory plasticity generates a ‘negative feedback’ whereby high excitatory firing rates lead to net potentiation of inhibitory synapses while low excitatory firing rates lead to net depression of inhibitory synapses. Other inhibitory STDP learning rules can also implement this type of negative feedback (Luz and Shamir, 2012; Kleberg et al., 2014), and we suspect that they would also generate the adapted and novelty responses as in our model.

One line of evidence speaking against inhibitory plasticity is that SSA might be independent of NMDA receptor activation (Farley et al., 2010). Inhibitory plasticity, by contrast, seems to be NMDA receptor-dependent (D'amour and Froemke, 2015; Field et al., 2020). However, there is some discrepancy regarding how exactly NMDA receptors are involved in SSA (Ross and Hamm, 2020), since blocking NMDA receptors can disrupt the MMN (Tikhonravov et al., 2008; Chen et al., 2015). These results indicate that the underlying cellular mechanisms of adaptive phenomena still need to be carefully disentangled.

In our model, the direction of inhibitory weight change (iLTD or iLTP) depends on the firing rate of the postsynaptic excitatory cells (see Vogels et al., 2011). Postsynaptic firing rates above a ‘target firing rate’ will on average lead to iLTP, while postsynaptic firing rates below the target rate will lead to iLTD. In turn, the average magnitude of inhibitory weight change depends on the firing rate of the presynaptic inhibitory neurons (see Vogels et al., 2011). Therefore, if the background activity between stimulus presentations in our model is very low, recovery from adaptation will only happen on a very slow timescale (as in Figure 6—figure supplement 1). However, if the activity between stimulus presentations is higher (either because of a higher background firing rate or because of evoked activity from other sources, for example other stimuli), the adapted stimulus can recover faster (as in Figure 5). Therefore, we conclude that our model can capture the reduced adaptation for longer inter-stimulus intervals as found in experiments (Ulanovsky et al., 2004; Cohen-Kashi Malina et al., 2013) when background activity in the inter-stimulus interval is elevated.

Alternative mechanisms can account for adapted and novelty responses

Undoubtedly, mechanisms other than inhibitory plasticity might underlie the difference in network response to repeated and novel stimuli. These mechanisms can be roughly divided into two groups: mechanisms that are unspecific and mechanisms that are specific to the stimulus. Two examples of unspecific mechanisms are intrinsic plasticity and an adaptive current. Intrinsic plasticity is a form of activity-dependent plasticity that adjusts a neuron’s intrinsic excitability (Debanne et al., 2019) and has been suggested to explain certain adaptive phenomena (Levakova et al., 2019). Other models at the single-neuron level incorporate an additional current variable, the adaptive current, which increases with each postsynaptic spike and decays otherwise. This adaptive current reduces the neuron’s membrane potential after a spike (Brette and Gerstner, 2005). However, any unspecific mechanism can only account for firing-rate adaptation but not for SSA (Nelken, 2014; Figure 6C). Examples of stimulus-specific mechanisms are short-term and long-term plasticity of excitatory synapses. Excitatory short-term depression, usually of thalamocortical synapses, is the most widely hypothesized mechanism underlying adaptive phenomena in cortex (Nelken, 2014).

Short-term plasticity (Abbott, 1997; Tsodyks et al., 1998) has been implicated in a number of adaptation phenomena in different sensory cortices and contexts. One example is an already established model to explain SSA, namely the ‘Adaptation of Narrowly Tuned Modules’ (ANTM) model (Nelken, 2014; Khouri and Nelken, 2015). This model has been extensively studied in the context of adaptation to tone frequencies (Mill et al., 2011a; Taaseh et al., 2011; Mill et al., 2012; Hershenhoren et al., 2014). Models based on short-term plasticity have also been extended to recurrent networks (Yarden and Nelken, 2017) and multiple inhibitory sub-populations (Park and Geffen, 2020). Experimental work has shown that short-term plasticity can be different at the synapses from PV and SOM interneurons onto pyramidal neurons, and can generate diverse temporal responses (facilitated, depressed and stable responses) in pyramidal neurons in the auditory cortex (Seay et al., 2020). Short-term plasticity can also capture the differences in responses to periodic versus random presentation of repeated stimuli in a sequence (Yaron et al., 2012; Chait, 2020). Finally, short-term plasticity has been suggested to explain a prominent phenomenon in the auditory cortex, named ‘forward masking’ (Brosch and Schreiner, 1997), in which a preceding masker stimulus influences the response to a following stimulus (Phillips et al., 2017). This highlights short-term plasticity as a key player in adaptive processes in the different sensory cortices, although it likely works in tandem with long-term plasticity.

Timescales of plasticity mechanisms

The crucial parameter for the generation of adaptation based on short-term plasticity is the timescale of the short-term plasticity mechanism. Experimental studies find adaptation timescales from hundreds of milliseconds to tens of seconds (Ulanovsky et al., 2004; Lundstrom et al., 2010; Homann et al., 2017; Latimer et al., 2019), and in the case of habituation even multiple days (Haak et al., 2014; Ramaswami, 2014). At the same time, the timescales of short-term plasticity can range from milliseconds to minutes (Zucker and Regehr, 2002). Hence, explaining the different timescales of adaptive phenomena would likely require a short-term plasticity timescale that can be dynamically adjusted. Our work shows that inhibitory plasticity can readily lead to adaptation on multiple timescales without the need for any additional assumptions (Figure 4). However, it is unclear whether inhibitory plasticity can act sufficiently fast to explain adaptation phenomena on the timescale of seconds, as in our model (Figure 4C,D). Most computational models of recurrent networks with plastic connections rely on fast inhibitory plasticity to stabilize excitatory rate dynamics (Sprekeler, 2017; Zenke et al., 2017). Decreasing the learning rate of inhibitory plasticity five-fold eliminates the adaptation to repeated stimuli and the novelty response in our model (Figure 4—figure supplement 2). Experimentally, during the induction of inhibitory plasticity, spikes are paired for several minutes and it takes several tens of minutes to reach a new stable baseline of inhibitory synaptic strength (D'amour and Froemke, 2015; Field et al., 2020). Nonetheless, inhibitory postsynaptic currents increase significantly immediately after the induction of plasticity (see e.g. D'amour and Froemke, 2015; Field et al., 2020). This suggests that changes of inhibitory synaptic strength already occur while the plasticity induction protocol is still ongoing. Hence, we propose that inhibitory long-term plasticity is a suitable, though not the only, candidate to explain the generation of novelty responses and adaptive phenomena over multiple timescales.

Robustness of the model

We probed our findings against key parameters and assumptions in our model. First, we tested whether the specific choice of pretraining parameters and the complexity of the presented stimuli affect the generation of adapted and novelty responses. Varying the pretraining duration and the number of pretraining stimuli did not qualitatively change the novelty response and its properties (Figure 4—figure supplement 1). In addition, presenting different stimuli in the stimulation phase than in the pretraining phase (Materials and methods), to mimic the scenario of randomly oriented Gabor patches in Homann et al., 2017, preserved the adaptation to repeated stimuli and the generation of a novelty response (Figure 1—figure supplement 3).

Second, we explored how the timescales of inhibitory plasticity and of the normalization mechanism affect the generation of adapted and novelty responses. In many computational models, normalization mechanisms are justified by experimentally observed synaptic scaling. In our model, as in most computational work, the timescale of this normalization was much faster than synaptic scaling (Zenke et al., 2017). However, slowing normalization down did not affect the generation of adapted and novelty responses (Figure 1—figure supplement 5). Since the change in inhibitory synaptic weights through iSTDP is the key mechanism behind the generation of adapted and novelty responses, the speed of normalization was not crucial, as it only affected the excitatory and not the inhibitory weights. In contrast, we found that the learning rate of inhibitory plasticity needs to be ‘sufficiently fast’. Slow inhibitory plasticity failed to homeostatically stabilize firing rates in the network; hence, the network no longer showed an adapted response to repeated stimuli, and novelty responses became indiscernible from noise (Figure 4—figure supplement 2).

Disinhibition as a mechanism for novelty response amplification

Upon including a top-down disinhibitory signal in our model network, we observed: (1) an active amplification of the novelty response (Figure 7B); (2) a dense novelty response (Figure 7C), similar to experimental findings (Homann et al., 2017) (without a disinhibitory signal, the novelty response was not dense, see Figure 1—figure supplement 4); and (3) a flexible manipulation of neuronal responses through a change in the disinhibitory strength (Figure 7D,E).

In our model, we were agnostic to the mechanism that generates disinhibition. However, at least two possibilities exist in which the inhibitory population can be regulated by higher-order feedback to allow for disinhibition. First, inhibitory neurons in primary sensory areas can be shaped by diverse neuromodulatory signals, which allow for subtype-specific targeting of inhibitory neurons (Froemke, 2015). Second, higher order feedback onto layer 1 inhibitory cells could mediate the behavioral relevance of the adapted stimuli through a disinhibitory pathway (Letzkus et al., 2011; Wang and Yang, 2018). Hence, experiments that induce disinhibition either via local mechanisms within the same cortical layer or through higher cortical feedback can provide a test for our postulated role for disinhibition.

In our model, the disinhibitory signal was activated instantaneously. If feedback signals that report the detection of higher-order sequence violations do indeed exist in the brain, we expect them to arrive with a certain delay. Carefully testing whether dense responses arise with a temporal delay, reflecting higher-order processing and projection back to primary sensory areas, might shed light on how computations upon novel stimuli are distributed. Such experiments would probably require recording methods with finer temporal resolution than calcium imaging.

Experimental data which points towards a flexible modulation of novelty and adapted responses already exists. The active amplification of novelty responses generated by our model is consistent with some experimental data (Taaseh et al., 2011; Hershenhoren et al., 2014; Hamm and Yuste, 2016; Harms et al., 2016), but see also Vinken et al., 2017. Giving a behavioral meaning to a sound through fear conditioning has been shown to modify SSA (Yaron et al., 2020). Similarly, contrast adaptation has been shown to reverse when visual stimuli become behaviorally relevant (Keller et al., 2017). Other studies have also shown that as soon as a stimulus becomes behaviorally relevant, inhibitory neurons decrease their response and therefore disinhibit adapted excitatory neurons (Kato et al., 2015; Makino and Komiyama, 2015; Hattori et al., 2017). Attention might lead to activation of the disinhibitory pathway, allowing for a change in the novelty response compared to the unattended case, as suggested in MMN studies (Sussman et al., 2014). Especially in habituation, the idea that a change in context can assign significance to a stimulus and therefore block habituation, leading to ‘dehabituation’, is widely accepted (Ramaswami, 2014; Barron et al., 2017).

Hence, we suggest that disinhibition is a flexible mechanism to control several aspects of novelty responses, including the density of the response, which might be computationally important in signaling change detection to downstream areas (Homann et al., 2017). Altogether, our results suggest that disinhibition is capable of accounting for various aspects of novelty responses that cannot be accounted for by bottom-up computations. The functional purpose of a dense response to novel stimuli is yet to be explored.

Functional implications of adapted and novelty responses

In theoretical terms, our model is an attractor network. It differs from classic attractor models where inhibition is considered unspecific (like a ‘blanket’) (Amit and Brunel, 1997). Computational work is starting to uncover the functional role of specific inhibition in static networks (Rost et al., 2018; Najafi et al., 2020; Rostami et al., 2020) as well as the plasticity mechanisms that allow for specific connectivity to emerge (Mackwood et al., 2021). These studies have argued that inhibitory assemblies can improve the robustness of attractor dynamics (Rost et al., 2018) and keep a local balance of excitation and inhibition (Rostami et al., 2020). We showed that specific inhibitory connections readily follow from a tuned inhibitory population (Figure 1A, Figure 1—figure supplement 2). Our results suggest that adaptation is linked to a stimulus-specific excitatory/inhibitory (E/I) balance. Presenting a novel stimulus leads to a short-term disruption of the E/I balance, triggering inhibitory plasticity, which aims to restore the E/I balance (Figure 4; Vogels et al., 2011; D'amour and Froemke, 2015; Field et al., 2020). Disinhibition, which effectively disrupts the E/I balance, allows for flexible control of adapted and novelty responses (Figure 7). This links to the notion of disinhibition as a gating mechanism for learning and plasticity (Froemke et al., 2007; Letzkus et al., 2011; Kuhlman et al., 2013).

A multitude of functional implications have been suggested for the role of adaptation (Weber et al., 2019; Snow et al., 2017). We showed that one of these roles, the detection of unexpected (or novel) events, follows from the lack of selective adaptation to those events. A second, widely considered functional implication is predictive coding. In the predictive coding framework, the brain is viewed as an inference or prediction machine, thought to generate internal models of the world which are compared to incoming sensory inputs (Bastos et al., 2012; Clark, 2013; Friston, 2018). According to predictive coding, the overall goal of the brain is to minimize the prediction error, that is, the difference between the internal prediction and the sensory input (Rao and Ballard, 1999; Clark, 2013; Friston, 2018). Most predictive coding schemes hypothesize the existence of two populations of neurons: first, prediction error units that signal a mismatch between the internal model prediction and the incoming sensory stimuli; and second, prediction units that reflect what the respective layer ‘knows about the world’ (Rao and Ballard, 1999; Clark, 2013; Spratling, 2017). Our model suggests that primary sensory areas allow for bottom-up detection of stimulus changes without the need for an explicit population of error neurons or an internal model of the world. However, one could also interpret the state of all inhibitory synaptic weights as an implicit internal model of the recent frequency of various events in the environment.

Predictions and outlook

Our approach to mechanistically understand the generation of adapted and novelty responses leads to several testable predictions. First, the most general implication from our study is that inhibitory plasticity might serve as an essential mechanism underlying many adaptive phenomena. Our work suggests that inhibitory plasticity allows for adaptation on multiple timescales, ranging from the adaptation to sequence blocks on the timescale of seconds to slower adaptation on the timescale of minutes, corresponding to repeating multiple sequence blocks (Figure 4C,D). A second prediction follows from the finding that both excitatory and inhibitory neuron populations show adaptive behavior and novelty responses (Figure 3B,C). Adaptation of inhibitory neurons on the single-cell level has already been verified experimentally (Chen et al., 2015; Natan et al., 2015). Third, we further predict that a violation of the sequence order does not lead to a novelty response. Therefore, the novelty response should not be interpreted as signaling a violation of the exact sequence structure (Figure 3D,E). However, previous work has found a reduction in the response to repeated stimuli if the stimuli are presented periodically, rather than randomly, in a sequence (Yaron et al., 2012) (but see Mehra et al., 2021). Fourth, the height of the novelty peak in the population average depends on the input drive, where decreasing the input strength decreases the novelty response (Figure 3F). This could be tested, for example, in the visual system, by presenting visual stimuli with different contrasts.

In our modeling approach, we did not distinguish between different subtypes of inhibitory neurons. This assumption is certainly an oversimplification. The main types of inhibitory neurons, parvalbumin-positive (PV), somatostatin-positive (SOM), and vasoactive intestinal peptide (VIP) expressing neurons, differ in their connectivity and their hypothesized functional roles (Tremblay et al., 2016). This is certainly also true for adaptation, and computational studies have already started to tackle this problem (Park and Geffen, 2020; Seay et al., 2020). Studies of the influence of inhibitory neurons on adaptation have shown that different interneuron types have unique contributions to adaptation (Kato et al., 2015; Natan et al., 2015; Hamm and Yuste, 2016; Natan et al., 2017; Garrett et al., 2020; Heintz et al., 2020). It would be interesting to explore the combination of microcircuit connectivity of excitatory neurons, PVs, SOMs, and VIPs with subtype-specific short-term (Seay et al., 2020; Phillips et al., 2017) and long-term inhibitory plasticity mechanisms (Agnes et al., 2020) on the generation and properties of novelty responses.

In sum, we have proposed a mechanistic model for the emergence of adapted and novelty responses based on inhibitory plasticity, and the regulation of this novelty response by top-down signals. Our findings offer insight into the flexible and adaptive responses of animals in constantly changing environments, and could be further relevant for disorders like schizophrenia where adapted responses are perturbed (Hamm et al., 2017).

Materials and methods

We built a biologically plausible spiking neuronal network model of the mammalian cortex based on recent experimental findings on tuning, connectivity, and synaptic plasticity. The model consists of 4000 excitatory exponential integrate-and-fire (EIF) neurons and 1000 inhibitory leaky integrate-and-fire (LIF) neurons (Table 1). Excitatory (E) and inhibitory (I) neurons were randomly recurrently connected (Table 2). Excitatory-to-excitatory and inhibitory-to-excitatory connections were plastic (see below). In addition, excitatory-to-excitatory weight dynamics were stabilized by a homeostatic mechanism (Fiete et al., 2010), which preserved the total sum of all incoming synaptic weights into an excitatory neuron. All other synapses in the network were fixed. Both excitatory and inhibitory neurons received an excitatory baseline feedforward input in the form of Poisson spikes. Furthermore, different subsets of excitatory and inhibitory neurons received excess input with elevated Poisson rate to model the presentation of stimuli (see below, Figure 1A, left; Table 4).
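The overall network skeleton can be summarized in a short sketch in Julia (the language used for our simulations, see Simulation details). This is a minimal illustration, not the published implementation; the function name random_weights and the code organization are illustrative, while population sizes, connection probability, and initial weights follow Tables 1 and 2.

```julia
# Minimal sketch (not the published code): random recurrent connectivity with
# population sizes from Table 1 and weights from Table 2.
NE, NI = 4000, 1000        # number of excitatory and inhibitory neurons
p = 0.2                    # connection probability

# weights[post, pre] holds the synaptic strength in pF (0 means no connection);
# self-connections are not treated specially in this sketch
function random_weights(npost, npre, j0, p)
    W = zeros(npost, npre)
    for post in 1:npost, pre in 1:npre
        if rand() < p
            W[post, pre] = j0
        end
    end
    return W
end

W_EE = random_weights(NE, NE, 2.76, p)    # E -> E, plastic (triplet STDP + normalization)
W_EI = random_weights(NE, NI, 48.7, p)    # I -> E, plastic (iSTDP)
W_IE = random_weights(NI, NE, 1.27, p)    # E -> I, fixed
W_II = random_weights(NI, NI, 16.2, p)    # I -> I, fixed
```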

Table 1. Parameters for the excitatory (EIF) and inhibitory (LIF) membrane dynamics (Litwin-Kumar and Doiron, 2014).

Symbol Description Value
NE Number of E neurons 4000
NI Number of I neurons 1000
τE,τI E, I neuron resting membrane time constant 20 ms
VrestE E neuron resting potential - 70 mV
VrestI I neuron resting potential - 62 mV
ΔT Slope factor of exponential 2 mV
C Membrane capacitance 300 pF
gL Membrane conductance C/τE
VrevE E reversal potential 0 mV
VrevI I reversal potential - 75 mV
Vthr Threshold potential - 52 mV
Vpeak Peak threshold potential 20 mV
Vreset E, I neuron reset potential - 60 mV
τabs E, I absolute refractory period 1 ms

Table 2. Parameters for feedforward and recurrent connections (Litwin-Kumar and Doiron, 2014).

Symbol Description Value
p Connection probability 0.2
τriseE Rise time for E synapses 1 ms
τdecayE Decay time for E synapses 6 ms
τriseI Rise time for I synapses 0.5 ms
τdecayI Decay time for I synapses 2 ms
r¯extEE Avg. rate of external input to E neurons 4.5 kHz
r¯extIE Avg. rate of external input to I neurons 2.25 kHz
JminEE Minimum E to E synaptic weight 1.78 pF
JmaxEE Maximum E to E synaptic weight 21.4 pF
J0EE Initial E to E synaptic weight 2.76 pF
JminEI Minimum I to E synaptic weight 48.7 pF
JmaxEI Maximum I to E synaptic weight 243 pF
J0EI Initial I to E synaptic weight 48.7 pF
JIE Synaptic weight from E to I 1.27 pF
JII Synaptic weight from I to I 16.2 pF
JEEx Synaptic weight from external input population to E 1.78 pF
JIEx Synaptic weight from external input population to I 1.27 pF

Dynamics of synaptic conductances and the membrane potential

The membrane dynamics of each excitatory neuron was modeled as an exponential integrate-and-fire (EIF) neuron model (Fourcaud-Trocmé et al., 2003):

$$C\,\frac{dV(t)}{dt} = -g_L\bigl(V(t)-V_{\mathrm{rest}}^{E}\bigr) + g_L\,\Delta_T\,\exp\!\left(\frac{V(t)-V_T}{\Delta_T}\right) - g^{EE}(t)\bigl(V(t)-V_{\mathrm{rev}}^{E}\bigr) - g^{EI}(t)\bigl(V(t)-V_{\mathrm{rev}}^{I}\bigr), \qquad (1)$$

where V(t) is the membrane potential of the modeled neuron, C the membrane capacitance, gL the membrane conductance, and ΔT is the slope factor of the exponential rise. The membrane potential was reset to Vreset once the diverging potential reached the threshold peak voltage Vpeak. Inhibitory neurons were modeled via a leaky-integrate-and-fire neuron model

$$C\,\frac{dV(t)}{dt} = -g_L\bigl(V(t)-V_{\mathrm{rest}}^{I}\bigr) - g^{IE}(t)\bigl(V(t)-V_{\mathrm{rev}}^{E}\bigr) - g^{II}(t)\bigl(V(t)-V_{\mathrm{rev}}^{I}\bigr). \qquad (2)$$

Once the membrane potential reached the threshold voltage Vthr, the membrane potential was reset to Vreset. The absolute refractory period was modeled by clamping the membrane voltage of a neuron that just spiked to the reset voltage Vreset for the duration τabs. In this study, we did not model additional forms of adaptation, such as adaptive currents or spiking threshold VT adaptation. To avoid extensive parameter tuning, we used previously published parameter values (Litwin-Kumar and Doiron, 2014; Table 1).

We compared this model to one where we froze plasticity and included adaptive currents wadapt (Figure 6C, top). We modeled this by subtracting wadapt(t) on the right-hand side of Equation 1 (Brette and Gerstner, 2005). Upon a spike, wadapt(t) increased by bw, and the sub-threshold dynamics of the adaptive current were described by $\tau_w \frac{d}{dt} w_{\mathrm{adapt}}(t) = -w_{\mathrm{adapt}}(t) + a_w \bigl(V(t) - V_{\mathrm{rest}}^{E}\bigr)$, where aw = 4 nS denotes the subthreshold and bw = 80.5 pA the spike-triggered adaptation. The adaptation timescale was set to τw = 150 ms.
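To make the membrane dynamics concrete, the following is a minimal Euler-integration sketch in Julia of Equation 1 for a single excitatory neuron, with the adaptive current of the frozen-plasticity control as an optional term. Parameter values follow Table 1 and the adaptation parameters above; the function name eif_step and the code organization are illustrative and not the published implementation.

```julia
# Sketch: one Euler step of the EIF membrane dynamics (Eq. 1), with the optional
# adaptive current w_adapt used only in the frozen-plasticity control.
dt   = 0.1                 # ms, integration time step
C    = 300.0               # pF, membrane capacitance
gL   = C / 20.0            # nS, leak conductance (C / tau_E)
VrestE, VrevE, VrevI = -70.0, 0.0, -75.0    # mV
VT, Vpeak, Vreset    = -52.0, 20.0, -60.0   # mV
DeltaT = 2.0                                # mV, slope factor
aw, bw, tauw = 4.0, 80.5, 150.0             # nS, pA, ms (adaptive current)

# gE, gI: total excitatory/inhibitory conductances (nS); returns the updated state
function eif_step(V, gE, gI, wadapt; adapt=false)
    I_adapt = adapt ? wadapt : 0.0
    dV = (-gL*(V - VrestE) + gL*DeltaT*exp((V - VT)/DeltaT)
          - gE*(V - VrevE) - gI*(V - VrevI) - I_adapt) / C
    V += dt * dV
    if adapt
        wadapt += dt * (-wadapt + aw*(V - VrestE)) / tauw
    end
    spiked = V >= Vpeak
    if spiked
        V = Vreset                 # reset; refractory clamping not shown
        adapt && (wadapt += bw)
    end
    return V, wadapt, spiked
end
```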

The conductance of neuron i, which is part of population X and is targeted by another neuron in population Y, was denoted by giXY. Both X and Y could refer either to the excitatory or the inhibitory population, that is, X, Y ∈ {E, I}. The shape of the synaptic kernels F(t) was a difference of exponentials and differed for excitatory and inhibitory input depending on the rise and decay times τriseY and τdecayY:

$$F^{Y}(t) = \frac{e^{-t/\tau_{\mathrm{decay}}^{Y}} - e^{-t/\tau_{\mathrm{rise}}^{Y}}}{\tau_{\mathrm{decay}}^{Y} - \tau_{\mathrm{rise}}^{Y}}. \qquad (3)$$

This kernel was convolved with the total inputs to neuron i weighted with the respective synaptic strength to yield the total conductance

$$g_i^{XY}(t) = F^{Y}(t) \ast \left(J_{\mathrm{ext}}^{XY}\, s_{i,\mathrm{ext}}^{XY}(t) + \sum_j J_{ij}^{XY}\, s_j^{Y}(t)\right), \qquad (4)$$

where sjY(t) is the spike train of neuron j in the network and si,extXY denotes the spike train of the external input to neuron i. The external spike trains were generated as independent homogeneous Poisson processes. The synaptic strength from the input neurons to the network neurons, JextXY, was assumed to be constant.
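The following Julia sketch illustrates the kernel of Equation 3 and the conductance of Equation 4 by direct discrete convolution of a weighted spike train, using the excitatory rise and decay times from Table 2. The names F_E and conductance are illustrative, and a full simulation would typically update the conductance recursively rather than by explicit convolution.

```julia
# Sketch: double-exponential synaptic kernel (Eq. 3) and conductance (Eq. 4)
# computed by direct convolution of a weighted spike train (illustrative only).
dt = 0.1                              # ms
tau_rise_E, tau_decay_E = 1.0, 6.0    # ms, excitatory synapses (Table 2)

F_E(t) = t < 0 ? 0.0 :
         (exp(-t/tau_decay_E) - exp(-t/tau_rise_E)) / (tau_decay_E - tau_rise_E)

# spiketrain: vector of 0/1 per time bin; J: synaptic weight in pF
function conductance(spiketrain, J)
    n = length(spiketrain)
    g = zeros(n)
    for i in 1:n, k in 1:i
        g[i] += J * spiketrain[k] * F_E((i - k) * dt)
    end
    return g
end

# example: one presynaptic E neuron firing Poisson-like at ~5 Hz for 0.5 s
spikes  = Float64.(rand(5_000) .< 5.0/1000.0 * dt)
g_trace = conductance(spikes, 2.76)
```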

Excitatory and inhibitory plasticity

We implemented the plasticity from an excitatory to an excitatory neuron, JEE, based on the triplet spike-timing-dependent plasticity rule (triplet STDP), which uses triplets of pre- and postsynaptic spikes to evoke synaptic change (Sjöström et al., 2001; Pfister and Gerstner, 2006). The addition of a third spike for the induction of synaptic plasticity modifies the amount of potentiation and depression induced by classical pair-based STDP, where pairs of pre- and postsynaptic spikes induce plasticity based on their timing and order (Bi and Poo, 1998). The triplet eSTDP rule has been shown to capture the dependency of plasticity on firing rates found experimentally, whereby a high frequency of pre- and postsynaptic spike pairs leads to potentiation rather than the absence of synaptic change predicted by pair-based STDP (Sjöström et al., 2001; Pfister and Gerstner, 2006; Gjorgjieva et al., 2011; Table 3). In the triplet rule, four spike accumulators, r1, r2, o1, and o2, increase by one whenever a spike of the corresponding neuron occurs, and otherwise decrease exponentially with their respective time constants τ+, τx, τ-, and τy:

$$\frac{dr_1(t)}{dt} = -\frac{r_1(t)}{\tau_+}, \quad \text{if } t = t_{\mathrm{pre}} \text{ then } r_1 \rightarrow r_1 + 1,$$
$$\frac{dr_2(t)}{dt} = -\frac{r_2(t)}{\tau_x}, \quad \text{if } t = t_{\mathrm{pre}} \text{ then } r_2 \rightarrow r_2 + 1,$$
$$\frac{do_1(t)}{dt} = -\frac{o_1(t)}{\tau_-}, \quad \text{if } t = t_{\mathrm{post}} \text{ then } o_1 \rightarrow o_1 + 1,$$
$$\frac{do_2(t)}{dt} = -\frac{o_2(t)}{\tau_y}, \quad \text{if } t = t_{\mathrm{post}} \text{ then } o_2 \rightarrow o_2 + 1. \qquad (5)$$

Table 3. Parameters for the implementation of Hebbian and homeostatic plasticity (Pfister and Gerstner, 2006; Litwin-Kumar and Doiron, 2014).

Symbol Description Value
τ- Time constant of pairwise post-synaptic detector (o1) 33.7 ms
τ+ Time constant of pairwise pre-synaptic detector (r1) 16.8 ms
τx Time constant of triplet pre-synaptic detector (r2) 101 ms
τy Time constant of triplet post-synaptic detector (o2) 125 ms
A2+ Pairwise potentiation amplitude 7.5 × 10⁻¹⁰ pF
A3+ Triplet potentiation amplitude 9.3 × 10⁻³ pF
A2- Pairwise depression amplitude 7 × 10⁻³ pF
A3- Triplet depression amplitude 2.3 × 10⁻⁴ pF
τyinhib Time constant of low-pass filtered spike train 20 ms
η Inhibitory plasticity learning rate 1 pF
r0 Target firing rate 3 Hz

The E-to-E weights were updated as

$$\Delta J^{EE}(t) = -\,o_1(t)\left[A_2^- + A_3^-\, r_2(t-\epsilon)\right] \quad \text{if } t = t_{\mathrm{pre}},$$
$$\Delta J^{EE}(t) = \; r_1(t)\left[A_2^+ + A_3^+\, o_2(t-\epsilon)\right] \quad \text{if } t = t_{\mathrm{post}}, \qquad (6)$$

where A+ and A- correspond to the excitatory LTP and LTD amplitudes, respectively, and the subscript refers to the triplet (3) or pairwise (2) term. The parameter ϵ > 0 ensures that the weights are updated before the respective spike accumulators are increased by 1. Spike interactions were modeled in an all-to-all scheme.
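As an illustration, the Julia sketch below applies Equations 5 and 6 to a single E-to-E synapse given binary pre- and postsynaptic spike trains, using the amplitudes and time constants of Table 3 and the weight bounds of Table 2. The function name triplet_stdp and the simple per-bin loop are illustrative, not the published implementation.

```julia
# Sketch: triplet STDP (Eqs. 5-6) for one E-to-E synapse; pre/post are binary
# spike trains with one entry per 0.1 ms time bin.
function triplet_stdp(pre::AbstractVector{Bool}, post::AbstractVector{Bool};
                      dt=0.1, J0=2.76)
    tau_p, tau_m, tau_x, tau_y = 16.8, 33.7, 101.0, 125.0   # ms (Table 3)
    A2p, A3p = 7.5e-10, 9.3e-3                              # pF, potentiation
    A2m, A3m = 7.0e-3, 2.3e-4                               # pF, depression
    Jmin, Jmax = 1.78, 21.4                                 # pF (Table 2)
    r1 = r2 = o1 = o2 = 0.0
    J = J0
    Js = Float64[]
    for t in eachindex(pre)
        # exponential decay of all four spike detectors
        r1 -= dt*r1/tau_p;  r2 -= dt*r2/tau_x
        o1 -= dt*o1/tau_m;  o2 -= dt*o2/tau_y
        if pre[t]            # depression; detectors read out before incrementing
            J -= o1*(A2m + A3m*r2)
            r1 += 1.0;  r2 += 1.0
        end
        if post[t]           # potentiation; detectors read out before incrementing
            J += r1*(A2p + A3p*o2)
            o1 += 1.0;  o2 += 1.0
        end
        J = clamp(J, Jmin, Jmax)
        push!(Js, J)
    end
    return Js
end

# example: uncorrelated 5 Hz Poisson pre- and postsynaptic spiking for 10 s
Js = triplet_stdp(rand(100_000) .< 0.0005, rand(100_000) .< 0.0005)
```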

The plasticity of inhibitory-to-excitatory connections, JEI, was modeled based on a symmetric inhibitory pairwise STDP (iSTDP) rule, initially suggested on theoretical grounds for its ability to homeostatically stabilize firing rates in recurrent networks (Vogels et al., 2011). According to this rule, the timing but not the order of pre- and postsynaptic spikes matters for the induction of synaptic plasticity. Other inhibitory rules have also been measured experimentally, including classical Hebbian and anti-Hebbian (e.g. Holmgren and Zilberter, 2001; Woodin et al., 2003; Haas et al., 2006; for a review see Hennequin et al., 2017), and some may even depend on the type of the interneuron (Udakis et al., 2020). We chose the iSTDP rule because it can stabilize excitatory firing rate dynamics in recurrent networks (Vogels et al., 2011; Litwin-Kumar and Doiron, 2014) and was recently verified to operate in the auditory cortex of mice (D'amour and Froemke, 2015). The plasticity parameters are shown in Table 3. The two spike accumulators yE/I, for the inhibitory pre- and the excitatory post-synaptic neuron, have the same time constant τyinhib. Their dynamics were described by

$$\frac{dy^{I}(t)}{dt} = -\frac{y^{I}(t)}{\tau_y^{\mathrm{inhib}}}, \quad \text{if } t = t_{\mathrm{pre}/I} \text{ then } y^{I} \rightarrow y^{I} + 1,$$
$$\frac{dy^{E}(t)}{dt} = -\frac{y^{E}(t)}{\tau_y^{\mathrm{inhib}}}, \quad \text{if } t = t_{\mathrm{post}/E} \text{ then } y^{E} \rightarrow y^{E} + 1. \qquad (7)$$

The I-to-E weights were updated as 

$$\Delta J_{ij}^{EI}(t) = \eta\left(y_i^{E}(t) - 2\, r_0\, \tau_y^{\mathrm{inhib}}\right) \quad \text{if } t = t_{\mathrm{pre}/I},$$
$$\Delta J_{ij}^{EI}(t) = \eta\, y_j^{I}(t) \quad \text{if } t = t_{\mathrm{post}/E}, \qquad (8)$$

where η is the learning rate, and r0 corresponds to the target firing rate of the excitatory neuron. In Figure 4—figure supplement 2 we investigated the inhibitory learning rate η. Figure 1—figure supplement 1 shows the excitatory and inhibitory STDP rules for different pairing frequencies.
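A corresponding Julia sketch of the iSTDP rule of Equations 7 and 8 for a single I-to-E synapse is given below, with the learning rate, target rate, and filter time constant from Table 3 and the I-to-E weight bounds from Table 2. The function name istdp and the per-bin loop are illustrative; note that with the target rate expressed per millisecond, the depression offset 2·r0·τyinhib equals 0.12.

```julia
# Sketch: symmetric inhibitory STDP (Eqs. 7-8) for one I-to-E synapse.
function istdp(pre_I::AbstractVector{Bool}, post_E::AbstractVector{Bool};
               dt=0.1, J0=48.7)
    tau_y = 20.0               # ms, low-pass filter time constant (Table 3)
    eta   = 1.0                # pF, learning rate
    r0    = 3.0 / 1000.0       # target rate: 3 Hz expressed per ms
    Jmin, Jmax = 48.7, 243.0   # pF (Table 2)
    yI = yE = 0.0
    J = J0
    Js = Float64[]
    for t in eachindex(pre_I)
        yI -= dt*yI/tau_y
        yE -= dt*yE/tau_y
        if pre_I[t]            # inhibitory spike: depress if postsynaptic activity is low
            J += eta*(yE - 2*r0*tau_y)
            yI += 1.0
        end
        if post_E[t]           # excitatory spike: potentiate in proportion to yI
            J += eta*yI
            yE += 1.0
        end
        J = clamp(J, Jmin, Jmax)
        push!(Js, J)
    end
    return Js
end

# example: a postsynaptic E neuron firing at 20 Hz (above the 3 Hz target)
# drives net potentiation of the inhibitory weight
Js = istdp(rand(100_000) .< 0.0005, rand(100_000) .< 0.002)
```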

Additional homeostatic mechanisms

Inhibitory plasticity alone is considered insufficient to prevent runaway activity in this network implementation. Hence, we implemented additional mechanisms that also have a homeostatic effect. To avoid unlimited weight growth, the synaptic weights were bounded from below and from above (Table 2). Subtractive normalization ensured that the total synaptic input to an excitatory neuron remained constant throughout the simulation. This was implemented by adjusting all incoming weights to each neuron every Δt = 20 ms according to

$$\Delta J_{ij}^{EE}(t) = -\frac{\sum_j J_{ij}^{EE}(t) - \sum_j J_{ij}^{EE}(0)}{N_i^{E}}, \qquad (9)$$

where i is the index of the post-synaptic and j of the pre-synaptic neurons. NiE is the number of excitatory connections onto neuron i (Fiete et al., 2010). In Figure 1—figure supplement 5 we investigated the effect of the normalization timestep Δt on the novelty response.
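The following Julia sketch applies Equation 9 to a full E-to-E weight matrix, acting only on existing connections and respecting the weight bounds of Table 2. The function name normalize_rows! and the use of a stored target row sum are illustrative, not the published implementation.

```julia
# Sketch: subtractive normalization (Eq. 9), applied every 20 ms to the incoming
# E-to-E weights of each excitatory neuron. W[i, j] is the weight from j to i;
# target_rowsum[i] is the total incoming weight of neuron i at t = 0.
function normalize_rows!(W, target_rowsum; Jmin=1.78, Jmax=21.4)
    for i in 1:size(W, 1)
        idx = findall(!iszero, view(W, i, :))   # existing incoming connections
        isempty(idx) && continue
        correction = (sum(W[i, j] for j in idx) - target_rowsum[i]) / length(idx)
        for j in idx
            W[i, j] = clamp(W[i, j] - correction, Jmin, Jmax)
        end
    end
    return W
end

# usage: store the initial row sums once, then call normalize_rows! every 20 ms
# target_rowsum = vec(sum(W_EE, dims=2))
# normalize_rows!(W_EE, target_rowsum)
```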

Stimulation protocol

All neurons received external excitatory baseline input. The baseline input to excitatory neurons rextE was higher than the input to inhibitory neurons rextI (Table 4). An external input of rextE=4.5 kHz can be interpreted as 1000 external presynaptic neurons with average firing rates of 4.5 Hz (compare Litwin-Kumar and Doiron, 2014).

Table 4. Parameters for the stimulation paradigm and stimulus tuning.

Symbol Description Value
rextE External baseline input to E 4.5 kHz
rextI External baseline input to I 2.25 kHz
rstimE Additional input to E during stimulus presentation 12 kHz
rstimI Additional input to I during stimulus presentation 1.2 kHz
rdisinhI Additional input to I during disinhibition −1.5 kHz
pmemberE Probability for an E neuron to be driven by a stimulus 5%
pmemberI Probability for an I neuron to be driven by a stimulus 15%

The stimulation paradigm was inspired by a recent study in the visual system (Homann et al., 2017). In Homann et al., 2017, the stimulation consisted of images with 100 randomly chosen, superimposed Gabor patches. Rather than explicitly modeling oriented and spatially localized Gabor patches, we implemented stimuli corresponding to Gabor patches of a given orientation by strongly and simultaneously driving subsets of cells. Hence, the model analog of presenting a sensory stimulus is increased input to a subset of neurons. Every time a particular stimulus is presented again, the same set of neurons receives strong external stimulation, rstimE and rstimI. Therefore, while a stimulus in our stimulation paradigm is functionally similar to presenting Gabor patches with similar orientations, it does not represent the Gabor patches themselves.

We first implemented a pretraining phase. In this phase, we sequentially stimulated the subsets of neurons driven by all stimuli (repeated and novel) eventually used in the stimulation phase. The stimuli were presented in random order, leading to a change in network connectivity that is stimulus-dependent but not sequence-dependent (Figure 4B, first 100 s shown for five repetitions of each stimulus). Hence, the pretraining phase is a phenomenological model of the developmental process that generates structure in the network connections prior to the actual stimulation paradigm. This can be interpreted as imprinting a ‘backbone’ of orientation-selective neurons, where cells selective to similar features (e.g. similar orientations) become strongly connected due to synaptic plasticity (as seen in experiments, e.g. Ko et al., 2011; Ko et al., 2013).

Next, we implemented a stimulation phase where we presented the same stimuli used during the pretraining phase according to the repeated sequence stimulation paradigm. To match the randomly oriented Gabor patches presented in Homann et al., 2017, we also performed additional simulations where in the stimulation phase we activated different, randomly chosen, subsets of neurons (Figure 1—figure supplement 3) (note that there is some overlap with the imprinted orientation selective subsets).

In the standard repeated sequence stimulation paradigm (Figure 3 and Figure 4), a total of 65 stimulus presentations occurred during pretraining (5 repetitions each of the 3 repeated and the 10 novel stimuli). In Figure 4—figure supplement 1, we tested whether changes in the pretraining phase, such as a change in the number of repetitions of each stimulus or in the total number of stimuli, affect our results.

The timescales of the experimental paradigm in Homann et al., 2017 and of the model paradigm were matched, that is, the neurons tuned to a stimulus received additional input for 300 ms of simulation time. Stimuli were presented without pauses in between, corresponding to continuous stimulus presentation without blank images (visual) or silence (auditory) between sequence blocks. Table 4 lists the stimulus parameters.

In contrast to several previous plastic recurrent networks, we did not only consider the excitatory neurons to have stimulus tuning properties but included inhibitory tuning as well. The probability that an excitatory neuron was driven by a particular stimulus was 5%, yielding roughly 200 neurons that responded specifically to that stimulus. We modeled inhibitory tuning to be both weaker and broader. The probability that an inhibitory neuron was driven by a particular stimulus was 15%, yielding roughly 150 neurons that responded specifically to that stimulus. There was overlap in stimulus tuning, that is, one neuron could be driven by multiple stimuli. Given this broader tuning of inhibitory neurons compared to excitatory neurons, a single inhibitory neuron could strongly inhibit multiple excitatory neurons selective to different stimuli, effectively implementing lateral inhibition.
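To illustrate the stimulation protocol, the Julia sketch below assigns stimulus membership with the probabilities of Table 4 and assembles the external Poisson rates for every neuron while a given stimulus is shown; external spikes can then be drawn per 0.1 ms time bin. The names membersE, input_rates, and ext_spikes are illustrative, not the published implementation.

```julia
# Sketch: stimulus tuning and external drive (rates in kHz, from Table 4).
NE, NI   = 4000, 1000
nstimuli = 13                              # e.g. 3 repeated + 10 novel stimuli
pE, pI   = 0.05, 0.15                      # membership probabilities

# membersE[s] / membersI[s]: indices of neurons driven by stimulus s
membersE = [findall(rand(NE) .< pE) for _ in 1:nstimuli]   # ~200 E neurons each
membersI = [findall(rand(NI) .< pI) for _ in 1:nstimuli]   # ~150 I neurons each

# external Poisson rate for every neuron while stimulus s is presented
function input_rates(s)
    rE = fill(4.5, NE);  rE[membersE[s]] .+= 12.0   # baseline + stimulus drive to E
    rI = fill(2.25, NI); rI[membersI[s]] .+= 1.2    # baseline + stimulus drive to I
    return rE, rI
end

# during each time step of dt = 0.1 ms, a neuron receives an external spike with
# probability rate * dt (rate in kHz = spikes per ms)
ext_spikes(rates; dt=0.1) = rand(length(rates)) .< rates .* dt
```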

Stimulus tuning in both populations led to the formation of stimulus-specific excitatory assemblies due to synaptic plasticity, where the subsets of excitatory neurons receiving the same input developed strong connections among each other, as noted above (Figure 1—figure supplement 2C) and found experimentally (Ko et al., 2011; Miller et al., 2014; Lee et al., 2016). The strong, bidirectional connectivity among similarly selective neurons in our model was a direct consequence of the triplet STDP rule (Gjorgjieva et al., 2011; Montangie et al., 2020). Additionally, the connections from similarly tuned inhibitory to excitatory neurons also became stronger, as seen in experiments (Lee et al., 2014; Xue et al., 2014; Znamenskiy et al., 2018; Najafi et al., 2020). The number of stimulus-specific assemblies varied depending on the stimulation paradigm and corresponded to the number of unique stimuli presented in a given paradigm. We did not impose topographic organization of these assemblies (e.g. tonotopy in the auditory cortex), since it would not influence the generation of adapted and novelty responses but would increase model complexity. Such spatial organization could, however, be introduced by allowing the assemblies of neighboring stimuli to overlap.

Disinhibition in the model was implemented via additional inhibiting input to the inhibitory population, rdisinhI (Table 4). This was modeled in a purely phenomenological way, and we remain agnostic as to what causes the additional inhibition.
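In code, the disinhibitory signal amounts to lowering the external drive to all inhibitory neurons while the novel stimulus is shown, as in this brief Julia sketch building on the input_rates helper from the sketch above (the default of 1.5 kHz corresponds to rdisinhI in Table 4; the disinhibition strength was varied in Figure 7D,E):

```julia
# Sketch: during the presentation of the novel stimulus s, reduce the external
# drive to the inhibitory population by r_disinh (kHz); excitatory drive is unchanged.
function input_rates_disinhibited(s; r_disinh=1.5)
    rE, rI = input_rates(s)              # rates from the sketch above
    rI .= max.(rI .- r_disinh, 0.0)      # inhibit the inhibitory population
    return rE, rI
end
```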

Simulation details

The simulations were performed using the Julia programming language. Further evaluation and plotting were done in Python. Euler integration was implemented with a time step of 0.1 ms. Code implementing our model and generating the stimulation protocols can be found here: https://github.com/comp-neural-circuits/novelty-via-inhibitory-plasticity (Schulz, 2021; copy archived at swh:1:rev:d368b14a2368925b290923c2c11411d7b7a40bd1).

Acknowledgements

AS, CM, and JG thank the Max Planck Society for funding and MJB thanks the NEI and the Princeton Accelerator Fund for funding. We thank members of the ‘Computation in Neural Circuits’ group for useful discussions and comments on the manuscript.

Funding Statement

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Contributor Information

Julijana Gjorgjieva, Email: gjorgjieva@brain.mpg.de.

Maria N Geffen, University of Pennsylvania, United States.

Joshua I Gold, University of Pennsylvania, United States.

Funding Information

This paper was supported by the following grants:

  • Max-Planck-Gesellschaft Research Group Award to JG to Auguste Schulz, Christoph Miehl, Julijana Gjorgjieva.

  • NEI and Princeton Accelerator Fund to Michael J Berry.

Additional information

Competing interests

No competing interests declared.

Author contributions

Conceptualization, Resources, Software, Formal analysis, Investigation, Visualization, Methodology, Writing - original draft, Writing - review and editing.

Conceptualization, Resources, Software, Formal analysis, Investigation, Visualization, Methodology, Writing - original draft, Writing - review and editing.

Conceptualization, Methodology, Writing - review and editing.

Conceptualization, Resources, Supervision, Funding acquisition, Methodology, Writing - original draft, Writing - review and editing.

Additional files

Transparent reporting form

Data availability

The code to reproduce the figures in this paper has been uploaded to GitHub and can be accessed here: https://github.com/comp-neural-circuits/novelty-via-inhibitory-plasticity (copy archived at https://archive.softwareheritage.org/swh:1:rev:d368b14a2368925b290923c2c11411d7b7a40bd1).

References

  1. Abbott LF. Synaptic depression and cortical gain control. Science. 1997;275:221–224. doi: 10.1126/science.275.5297.221. [DOI] [PubMed] [Google Scholar]
  2. Agnes EJ, Luppi AI, Vogels TP. Complementary inhibitory weight profiles emerge from plasticity and allow flexible switching of receptive fields. The Journal of Neuroscience. 2020;40:9634–9649. doi: 10.1523/JNEUROSCI.0276-20.2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Amit DJ, Brunel N. Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral Cortex. 1997;7:237–252. doi: 10.1093/cercor/7.3.237. [DOI] [PubMed] [Google Scholar]
  4. Barlow HB. Possible principles underlying the transformations of sensory messages. Sensory Communication. 2013;1:216–234. doi: 10.7551/mitpress/9780262518420.003.0013. [DOI] [Google Scholar]
  5. Barron HC, Vogels TP, Behrens TE, Ramaswami M. Inhibitory engrams in perception and memory. PNAS. 2017;114:6666–6674. doi: 10.1073/pnas.1701812114. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Bastos AM, Usrey WM, Adams RA, Mangun GR, Fries P, Friston KJ. Canonical microcircuits for predictive coding. Neuron. 2012;76:695–711. doi: 10.1016/j.neuron.2012.10.038. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Bi GQ, Poo MM. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. The Journal of Neuroscience. 1998;18:10464–10472. doi: 10.1523/JNEUROSCI.18-24-10464.1998. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Brette R, Gerstner W. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. Journal of Neurophysiology. 2005;94:3637–3642. doi: 10.1152/jn.00686.2005. [DOI] [PubMed] [Google Scholar]
  9. Brosch M, Schreiner CE. Time course of forward masking tuning curves in cat primary auditory cortex. Journal of Neurophysiology. 1997;77:923–943. doi: 10.1152/jn.1997.77.2.923. [DOI] [PubMed] [Google Scholar]
  10. Chait M. How the brain discovers structure in sound sequences. Acoustical Science and Technology. 2020;41:48–53. doi: 10.1250/ast.41.48. [DOI] [Google Scholar]
  11. Chen IW, Helmchen F, Lütcke H. Specific early and late Oddball-Evoked responses in excitatory and inhibitory neurons of mouse auditory cortex. Journal of Neuroscience. 2015;35:12560–12573. doi: 10.1523/JNEUROSCI.2240-15.2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Clark A. Whatever next? predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences. 2013;36:181–204. doi: 10.1017/S0140525X12000477. [DOI] [PubMed] [Google Scholar]
  13. Cohen-Kashi Malina K, Jubran M, Katz Y, Lampl I. Imbalance between excitation and inhibition in the somatosensory cortex produces postadaptation facilitation. Journal of Neuroscience. 2013;33:8463–8471. doi: 10.1523/JNEUROSCI.4845-12.2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. D'amour JA, Froemke RC. Inhibitory and excitatory spike-timing-dependent plasticity in the auditory cortex. Neuron. 2015;86:514–528. doi: 10.1016/j.neuron.2015.03.014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Das S, Sadanandappa MK, Dervan A, Larkin A, Lee JA, Sudhakaran IP, Priya R, Heidari R, Holohan EE, Pimentel A, Gandhi A, Ito K, Sanyal S, Wang JW, Rodrigues V, Ramaswami M. Plasticity of local GABAergic interneurons drives olfactory habituation. PNAS. 2011;108:E646–E654. doi: 10.1073/pnas.1106411108. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Debanne D, Inglebert Y, Russier M. Plasticity of intrinsic neuronal excitability. Current Opinion in Neurobiology. 2019;54:73–82. doi: 10.1016/j.conb.2018.09.001. [DOI] [PubMed] [Google Scholar]
  17. Dhruv NT, Carandini M. Cascaded effects of spatial adaptation in the early visual system. Neuron. 2014;81:529–535. doi: 10.1016/j.neuron.2013.11.025. [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Fairhall AL. In: The Cognitive Neurosciences. 5th Edn. Gazzaniga MS, Mangun GR, editors. MIT Press; 2014. Adaptation and natural stimulus statistics; pp. 283–294. [Google Scholar]
  19. Farley BJ, Quirk MC, Doherty JJ, Christian EP. Stimulus-specific adaptation in auditory cortex is an NMDA-independent process distinct from the sensory novelty encoded by the mismatch negativity. Journal of Neuroscience. 2010;30:16475–16484. doi: 10.1523/JNEUROSCI.2793-10.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Field RE, D'amour JA, Tremblay R, Miehl C, Rudy B, Gjorgjieva J, Froemke RC. Heterosynaptic plasticity determines the set point for cortical Excitatory-Inhibitory balance. Neuron. 2020;106:842–854. doi: 10.1016/j.neuron.2020.03.002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Fiete IR, Senn W, Wang CZ, Hahnloser RH. Spike-time-dependent plasticity and heterosynaptic competition organize networks to produce long scale-free sequences of neural activity. Neuron. 2010;65:563–576. doi: 10.1016/j.neuron.2010.02.003. [DOI] [PubMed] [Google Scholar]
  22. Fischer TM, Blazis DE, Priver NA, Carew TJ. Metaplasticity at identified inhibitory synapses in Aplysia. Nature. 1997;389:860–865. doi: 10.1038/39892. [DOI] [PubMed] [Google Scholar]
  23. Fourcaud-Trocmé N, Hansel D, van Vreeswijk C, Brunel N. How spike generation mechanisms determine the neuronal response to fluctuating inputs. The Journal of Neuroscience. 2003;23:11628–11640. doi: 10.1523/JNEUROSCI.23-37-11628.2003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Friston K. Does predictive coding have a future? Nature Neuroscience. 2018;21:1019–1021. doi: 10.1038/s41593-018-0200-7. [DOI] [PubMed] [Google Scholar]
  25. Froemke RC, Merzenich MM, Schreiner CE. A synaptic memory trace for cortical receptive field plasticity. Nature. 2007;450:425–429. doi: 10.1038/nature06289. [DOI] [PubMed] [Google Scholar]
  26. Froemke RC. Plasticity of cortical excitatory-inhibitory balance. Annual Review of Neuroscience. 2015;38:195–219. doi: 10.1146/annurev-neuro-071714-034002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Garrett M, Manavi S, Roll K, Ollerenshaw DR, Groblewski PA, Ponvert ND, Kiggins JT, Casal L, Mace K, Williford A, Leon A, Jia X, Ledochowitsch P, Buice MA, Wakeman W, Mihalas S, Olsen SR. Experience shapes activity dynamics and stimulus coding of VIP inhibitory cells. eLife. 2020;9:e50340. doi: 10.7554/eLife.50340. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Geffen MN, de Vries SE, Meister M. Retinal ganglion cells can rapidly change polarity from off to on. PLOS Biology. 2007;5:e65. doi: 10.1371/journal.pbio.0050065. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Gjorgjieva J, Clopath C, Audet J, Pfister JP. A triplet spike-timing-dependent plasticity model generalizes the Bienenstock-Cooper-Munro rule to higher-order spatiotemporal correlations. PNAS. 2011;108:19383–19388. doi: 10.1073/pnas.1105933108. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Glanzman DL. Olfactory habituation: fresh insights from flies. PNAS. 2011;108:14711–14712. doi: 10.1073/pnas.1111230108. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Griffen TC, Maffei A. GABAergic synapses: their plasticity and role in sensory cortex. Frontiers in Cellular Neuroscience. 2014;8:91. doi: 10.3389/fncel.2014.00091. [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Haak KV, Fast E, Bao M, Lee M, Engel SA. Four days of visual contrast deprivation reveals limits of neuronal adaptation. Current Biology. 2014;24:2575–2579. doi: 10.1016/j.cub.2014.09.027. [DOI] [PubMed] [Google Scholar]
  33. Haas JS, Nowotny T, Abarbanel HD. Spike-timing-dependent plasticity of inhibitory synapses in the entorhinal cortex. Journal of Neurophysiology. 2006;96:3305–3313. doi: 10.1152/jn.00551.2006. [DOI] [PubMed] [Google Scholar]
  34. Hamm JP, Peterka DS, Gogos JA, Yuste R. Altered cortical ensembles in mouse models of schizophrenia. Neuron. 2017;94:153–167. doi: 10.1016/j.neuron.2017.03.019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Hamm JP, Yuste R. Somatostatin interneurons control a key component of mismatch negativity in mouse visual cortex. Cell Reports. 2016;16:597–604. doi: 10.1016/j.celrep.2016.06.037. [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Harms L, Michie PT, Näätänen R. Criteria for determining whether mismatch responses exist in animal models: focus on rodents. Biological Psychology. 2016;116:28–35. doi: 10.1016/j.biopsycho.2015.07.006. [DOI] [PubMed] [Google Scholar]
  37. Hattori R, Kuchibhotla KV, Froemke RC, Komiyama T. Functions and dysfunctions of neocortical inhibitory neuron subtypes. Nature Neuroscience. 2017;20:1199–1208. doi: 10.1038/nn.4619. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Heintz TG, Hinojosa AJ, Lagnado L. Opposing forms of adaptation in mouse visual cortex are controlled by distinct inhibitory microcircuits and gated by locomotion. bioRxiv. 2020 doi: 10.1101/2020.01.16.909788. [DOI] [PMC free article] [PubMed]
  39. Hennequin G, Agnes EJ, Vogels TP. Inhibitory plasticity: balance, control, and codependence. Annual Review of Neuroscience. 2017;40:557–579. doi: 10.1146/annurev-neuro-072116-031005. [DOI] [PubMed] [Google Scholar]
  40. Hershenhoren I, Taaseh N, Antunes FM, Nelken I. Intracellular correlates of stimulus-specific adaptation. Journal of Neuroscience. 2014;34:3303–3319. doi: 10.1523/JNEUROSCI.2166-13.2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Holmgren CD, Zilberter Y. Coincident spiking activity induces long-term changes in inhibition of neocortical pyramidal cells. The Journal of Neuroscience. 2001;21:8270–8277. doi: 10.1523/JNEUROSCI.21-20-08270.2001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Homann J, Koay SA, Glidden AM, Tank DW, Berry II MJ. Predictive coding of novel versus familiar stimuli in the primary visual cortex. bioRxiv. 2017 doi: 10.1101/197608. [DOI]
  43. Kato HK, Gillet SN, Isaacson JS. Flexible sensory representations in auditory cortex driven by behavioral relevance. Neuron. 2015;88:1027–1039. doi: 10.1016/j.neuron.2015.10.024. [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Keller GB, Bonhoeffer T, Hübener M. Sensorimotor mismatch signals in primary visual cortex of the behaving mouse. Neuron. 2012;74:809–815. doi: 10.1016/j.neuron.2012.03.040. [DOI] [PubMed] [Google Scholar]
  45. Keller AJ, Houlton R, Kampa BM, Lesica NA, Mrsic-Flogel TD, Keller GB, Helmchen F. Stimulus relevance modulates contrast adaptation in visual cortex. eLife. 2017;6:e21589. doi: 10.7554/eLife.21589. [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Khouri L, Nelken I. Detecting the unexpected. Current Opinion in Neurobiology. 2015;35:142–147. doi: 10.1016/j.conb.2015.08.003. [DOI] [PubMed] [Google Scholar]
  47. King JL, Lowe MP, Stover KR, Wong AA, Crowder NA. Adaptive processes in thalamus and cortex revealed by silencing of primary visual cortex during contrast adaptation. Current Biology. 2016;26:1295–1300. doi: 10.1016/j.cub.2016.03.018. [DOI] [PubMed] [Google Scholar]
  48. Kleberg FI, Fukai T, Gilson M. Excitatory and inhibitory STDP jointly tune feedforward neural circuits to selectively propagate correlated spiking activity. Frontiers in Computational Neuroscience. 2014;8:53. doi: 10.3389/fncom.2014.00053. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Ko H, Hofer SB, Pichler B, Buchanan KA, Sjöström PJ, Mrsic-Flogel TD. Functional specificity of local synaptic connections in neocortical networks. Nature. 2011;473:87–91. doi: 10.1038/nature09880. [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Ko H, Cossell L, Baragli C, Antolik J, Clopath C, Hofer SB, Mrsic-Flogel TD. The emergence of functional microcircuits in visual cortex. Nature. 2013;496:96–100. doi: 10.1038/nature12015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  51. Kuhlman SJ, Olivas ND, Tring E, Ikrar T, Xu X, Trachtenberg JT. A disinhibitory microcircuit initiates critical-period plasticity in the visual cortex. Nature. 2013;501:543–546. doi: 10.1038/nature12485. [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Latimer KW, Barbera D, Sokoletsky M, Awwad B, Katz Y, Nelken I, Lampl I, Fairhall AL, Priebe NJ. Multiple timescales account for adaptive responses across sensory cortices. The Journal of Neuroscience. 2019;39:10019–10033. doi: 10.1523/JNEUROSCI.1642-19.2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. Lee SH, Marchionni I, Bezaire M, Varga C, Danielson N, Lovett-Barron M, Losonczy A, Soltesz I. Parvalbumin-positive basket cells differentiate among hippocampal pyramidal cells. Neuron. 2014;82:1129–1144. doi: 10.1016/j.neuron.2014.03.034. [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Lee WC, Bonin V, Reed M, Graham BJ, Hood G, Glattfelder K, Reid RC. Anatomy and function of an excitatory network in the visual cortex. Nature. 2016;532:370–374. doi: 10.1038/nature17192. [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Letzkus JJ, Wolff SB, Meyer EM, Tovote P, Courtin J, Herry C, Lüthi A. A disinhibitory microcircuit for associative fear learning in the auditory cortex. Nature. 2011;480:331–335. doi: 10.1038/nature10674. [DOI] [PubMed] [Google Scholar]
  56. Levakova M, Kostal L, Monsempès C, Lucas P, Kobayashi R. Adaptive integrate-and-fire model reproduces the dynamics of olfactory receptor neuron responses in a moth. Journal of the Royal Society Interface. 2019;16:20190246. doi: 10.1098/rsif.2019.0246. [DOI] [PMC free article] [PubMed] [Google Scholar]
  57. Litwin-Kumar A, Doiron B. Formation and maintenance of neuronal assemblies through synaptic plasticity. Nature Communications. 2014;5:5319. doi: 10.1038/ncomms6319. [DOI] [PubMed] [Google Scholar]
  58. Lundstrom BN, Fairhall AL, Maravall M. Multiple timescale encoding of slowly varying whisker stimulus envelope in cortical and thalamic neurons in vivo. Journal of Neuroscience. 2010;30:5071–5077. doi: 10.1523/JNEUROSCI.2193-09.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  59. Luz Y, Shamir M. Balancing feed-forward excitation and inhibition via hebbian inhibitory synaptic plasticity. PLOS Computational Biology. 2012;8:e1002334. doi: 10.1371/journal.pcbi.1002334. [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Ma WP, Liu BH, Li YT, Huang ZJ, Zhang LI, Tao HW. Visual representations by cortical somatostatin inhibitory neurons - Selective but with weak and delayed responses. Journal of Neuroscience. 2010;30:14371–14379. doi: 10.1523/JNEUROSCI.3248-10.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. Mackwood O, Naumann LB, Sprekeler H. Learning excitatory-inhibitory neuronal assemblies in recurrent networks. eLife. 2021;10:e59715. doi: 10.7554/eLife.59715. [DOI] [PMC free article] [PubMed] [Google Scholar]
  62. Maffei A, Nataraj K, Nelson SB, Turrigiano GG. Potentiation of cortical inhibition by visual deprivation. Nature. 2006;443:81–84. doi: 10.1038/nature05079. [DOI] [PubMed] [Google Scholar]
  63. Makino H, Komiyama T. Learning enhances the relative impact of top-down processing in the visual cortex. Nature Neuroscience. 2015;18:1116–1122. doi: 10.1038/nn.4061. [DOI] [PMC free article] [PubMed] [Google Scholar]
  64. Mehra M, Mukesh A, Bandyopadhyay S. Separate functional subnetworks of excitatory neurons show preference to periodic and random sound structures. bioRxiv. 2021 doi: 10.1101/2021.02.13.431077. [DOI] [PMC free article] [PubMed]
  65. Mill R, Coath M, Wennekers T, Denham SL. A neurocomputational model of stimulus-specific adaptation to oddball and Markov sequences. PLOS Computational Biology. 2011a;7:e1002117. doi: 10.1371/journal.pcbi.1002117. [DOI] [PMC free article] [PubMed] [Google Scholar]
  66. Mill R, Coath M, Wennekers T, Denham SL. Abstract stimulus-specific adaptation models. Neural Computation. 2011b;23:435–476. doi: 10.1162/NECO_a_00077. [DOI] [PubMed] [Google Scholar]
  67. Mill R, Coath M, Wennekers T, Denham SL. Characterising stimulus-specific adaptation using a multi-layer field model. Brain Research. 2012;1434:178–188. doi: 10.1016/j.brainres.2011.08.063. [DOI] [PubMed] [Google Scholar]
  68. Miller JE, Ayzenshtat I, Carrillo-Reid L, Yuste R. Visual stimuli recruit intrinsically generated cortical ensembles. PNAS. 2014;111:E4053–E4061. doi: 10.1073/pnas.1406077111. [DOI] [PMC free article] [PubMed] [Google Scholar]
  69. Montangie L, Miehl C, Gjorgjieva J. Autonomous emergence of connectivity assemblies via spike triplet interactions. PLOS Computational Biology. 2020;16:e1007835. doi: 10.1371/journal.pcbi.1007835. [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Movshon JA, Lennie P. Pattern-selective adaptation in visual cortical neurones. Nature. 1979;278:850–852. doi: 10.1038/278850a0. [DOI] [PubMed] [Google Scholar]
  71. Näätänen R, Simpson M, Loveless NE. Stimulus deviance and evoked potentials. Biological Psychology. 1982;14:53–98. doi: 10.1016/0301-0511(82)90017-5. [DOI] [PubMed] [Google Scholar]
  72. Näätänen R, Paavilainen P, Rinne T, Alho K. The mismatch negativity (MMN) in basic research of central auditory processing: a review. Clinical Neurophysiology. 2007;118:2544–2590. doi: 10.1016/j.clinph.2007.04.026. [DOI] [PubMed] [Google Scholar]
  73. Najafi F, Elsayed GF, Cao R, Pnevmatikakis E, Latham PE, Cunningham JP, Churchland AK. Excitatory and inhibitory subnetworks are equally selective during Decision-Making and emerge simultaneously during learning. Neuron. 2020;105:165–179. doi: 10.1016/j.neuron.2019.09.045. [DOI] [PMC free article] [PubMed] [Google Scholar]
  74. Natan RG, Briguglio JJ, Mwilambwe-Tshilobo L, Jones SI, Aizenberg M, Goldberg EM, Geffen MN. Complementary control of sensory adaptation by two types of cortical interneurons. eLife. 2015;4:e09868. doi: 10.7554/eLife.09868. [DOI] [PMC free article] [PubMed] [Google Scholar]
  75. Natan RG, Rao W, Geffen MN. Cortical interneurons differentially shape frequency tuning following adaptation. Cell Reports. 2017;21:878–890. doi: 10.1016/j.celrep.2017.10.012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  76. Nelken I. Stimulus-specific adaptation and deviance detection in the auditory system: experiments and models. Biological Cybernetics. 2014;108:655–663. doi: 10.1007/s00422-014-0585-7. [DOI] [PubMed] [Google Scholar]
  77. Ohki K, Reid RC. Specificity and randomness in the visual cortex. Current Opinion in Neurobiology. 2007;17:401–407. doi: 10.1016/j.conb.2007.07.007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  78. Park Y, Geffen MN. A circuit model of auditory cortex. PLOS Computational Biology. 2020;16:e1008016. doi: 10.1371/journal.pcbi.1008016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  79. Pfister JP, Gerstner W. Triplets of spikes in a model of spike timing-dependent plasticity. Journal of Neuroscience. 2006;26:9673–9682. doi: 10.1523/JNEUROSCI.1425-06.2006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  80. Phillips EAK, Schreiner CE, Hasenstaub AR. Cortical interneurons differentially regulate the effects of acoustic context. Cell Reports. 2017;20:771–778. doi: 10.1016/j.celrep.2017.07.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  81. Ramaswami M. Network plasticity in adaptive filtering and behavioral habituation. Neuron. 2014;82:1216–1229. doi: 10.1016/j.neuron.2014.04.035. [DOI] [PubMed] [Google Scholar]
  82. Rao RP, Ballard DH. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience. 1999;2:79–87. doi: 10.1038/4580. [DOI] [PubMed] [Google Scholar]
  83. Ross JM, Hamm JP. Cortical microcircuit mechanisms of mismatch negativity and its underlying subcomponents. Frontiers in Neural Circuits. 2020;14:13. doi: 10.3389/fncir.2020.00013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  84. Rost T, Deger M, Nawrot MP. Winnerless competition in clustered balanced networks: inhibitory assemblies do the trick. Biological Cybernetics. 2018;112:81–98. doi: 10.1007/s00422-017-0737-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  85. Rostami V, Rost T, Riehle A, Albada SJv, Nawrot MP. Spiking neural network model of motor cortex with joint excitatory and inhibitory clusters reflects task uncertainty, reaction times, and variability dynamics. bioRxiv. 2020 doi: 10.1101/2020.02.27.968339. [DOI]
  86. Schulz A. novelty-via-inhibitory-plasticity. GitHub. 2021. swh:1:rev:d368b14a2368925b290923c2c11411d7b7a40bd1. https://archive.softwareheritage.org/swh:1:dir:25354235d9002a4f0b922bf5226d49d3eec097e4;origin=https://github.com/comp-neural-circuits/novelty-via-inhibitory-plasticity;visit=swh:1:snp:002d92b4645ffdcbcc332fb6e21f7c9a09030095;anchor=swh:1:rev:d368b14a2368925b290923c2c11411d7b7a40bd1
  87. Schwartz G, Harris R, Shrom D, Berry MJ. Detection and prediction of periodic patterns by the retina. Nature Neuroscience. 2007;10:552–554. doi: 10.1038/nn1887. [DOI] [PMC free article] [PubMed] [Google Scholar]
  88. Schwartz G, Berry MJ. Sophisticated temporal pattern recognition in retinal ganglion cells. Journal of Neurophysiology. 2008;99:1787–1798. doi: 10.1152/jn.01025.2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  89. Seay MJ, Natan RG, Geffen MN, Buonomano DV. Differential Short-Term plasticity of PV and SST neurons accounts for adaptation and facilitation of cortical neurons to auditory tones. The Journal of Neuroscience. 2020;40:9224–9235. doi: 10.1523/JNEUROSCI.0686-20.2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  90. Simoncelli EP, Olshausen BA. Natural image statistics and neural representation. Annual Review of Neuroscience. 2001;24:1193–1216. doi: 10.1146/annurev.neuro.24.1.1193. [DOI] [PubMed] [Google Scholar]
  91. Sjöström PJ, Turrigiano GG, Nelson SB. Rate, timing, and cooperativity jointly determine cortical synaptic plasticity. Neuron. 2001;32:1149–1164. doi: 10.1016/S0896-6273(01)00542-6. [DOI] [PubMed] [Google Scholar]
  92. Snow M, Coen-Cagli R, Schwartz O. Adaptation in the visual cortex: a case for probing neuronal populations with natural stimuli. F1000Research. 2017;6:1246. doi: 10.12688/f1000research.11154.1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  93. Spratling MW. A review of predictive coding algorithms. Brain and Cognition. 2017;112:92–97. doi: 10.1016/j.bandc.2015.11.003. [DOI] [PubMed] [Google Scholar]
  94. Sprekeler H. Functional consequences of inhibitory plasticity: homeostasis, the excitation-inhibition balance and beyond. Current Opinion in Neurobiology. 2017;43:198–203. doi: 10.1016/j.conb.2017.03.014. [DOI] [PubMed] [Google Scholar]
  95. Sussman ES, Chen S, Sussman-Fort J, Dinces E. The five myths of MMN: redefining how to use MMN in basic and clinical research. Brain Topography. 2014;27:553–564. doi: 10.1007/s10548-013-0326-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  96. Taaseh N, Yaron A, Nelken I. Stimulus-specific adaptation and deviance detection in the rat auditory cortex. PLOS ONE. 2011;6:e23369. doi: 10.1371/journal.pone.0023369. [DOI] [PMC free article] [PubMed] [Google Scholar]
  97. Thompson A, Gribizis A, Chen C, Crair MC. Activity-dependent development of visual receptive fields. Current Opinion in Neurobiology. 2017;42:136–143. doi: 10.1016/j.conb.2016.12.007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  98. Tikhonravov D, Neuvonen T, Pertovaara A, Savioja K, Ruusuvirta T, Näätänen R, Carlson S. Effects of an NMDA-receptor antagonist MK-801 on an MMN-like response recorded in anesthetized rats. Brain Research. 2008;1203:97–102. doi: 10.1016/j.brainres.2008.02.006. [DOI] [PubMed] [Google Scholar]
  99. Tremblay R, Lee S, Rudy B. GABAergic interneurons in the neocortex: from cellular properties to circuits. Neuron. 2016;91:260–292. doi: 10.1016/j.neuron.2016.06.033. [DOI] [PMC free article] [PubMed] [Google Scholar]
  100. Tsodyks M, Pawelzik K, Markram H. Neural networks with dynamic synapses. Neural Computation. 1998;10:821–835. doi: 10.1162/089976698300017502. [DOI] [PubMed] [Google Scholar]
  101. Udakis M, Pedrosa V, Chamberlain SEL, Clopath C, Mellor JR. Interneuron-specific plasticity at Parvalbumin and somatostatin inhibitory synapses onto CA1 pyramidal neurons shapes hippocampal output. Nature Communications. 2020;11:4395. doi: 10.1038/s41467-020-18074-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  102. Ulanovsky N, Las L, Nelken I. Processing of low-probability sounds by cortical neurons. Nature Neuroscience. 2003;6:391–398. doi: 10.1038/nn1032. [DOI] [PubMed] [Google Scholar]
  103. Ulanovsky N, Las L, Farkas D, Nelken I. Multiple time scales of adaptation in auditory cortex neurons. Journal of Neuroscience. 2004;24:10440–10453. doi: 10.1523/JNEUROSCI.1905-04.2004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  104. Vinken K, Vogels R, Op de Beeck H. Recent visual experience shapes visual processing in rats through Stimulus-Specific adaptation and response enhancement. Current Biology. 2017;27:914–919. doi: 10.1016/j.cub.2017.02.024. [DOI] [PubMed] [Google Scholar]
  105. Vogels TP, Sprekeler H, Zenke F, Clopath C, Gerstner W. Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science. 2011;334:1569–1573. doi: 10.1126/science.1211095. [DOI] [PubMed] [Google Scholar]
  106. Wang L, Maffei A. Inhibitory plasticity dictates the sign of plasticity at excitatory synapses. Journal of Neuroscience. 2014;34:1083–1093. doi: 10.1523/JNEUROSCI.4711-13.2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  107. Wang XJ, Yang GR. A disinhibitory circuit motif and flexible information routing in the brain. Current Opinion in Neurobiology. 2018;49:75–83. doi: 10.1016/j.conb.2018.01.002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  108. Weber AI, Krishnamurthy K, Fairhall AL. Coding principles in adaptation. Annual Review of Vision Science. 2019;5:427–449. doi: 10.1146/annurev-vision-091718-014818. [DOI] [PubMed] [Google Scholar]
  109. Woodin MA, Ganguly K, Poo MM. Coincident pre- and postsynaptic activity modifies GABAergic synapses by postsynaptic changes in Cl- transporter activity. Neuron. 2003;39:807–820. doi: 10.1016/S0896-6273(03)00507-5. [DOI] [PubMed] [Google Scholar]
  110. Wu YK, Hengen KB, Turrigiano GG, Gjorgjieva J. Homeostatic mechanisms regulate distinct aspects of cortical circuit dynamics. PNAS. 2020;117:24514–24525. doi: 10.1073/pnas.1918368117. [DOI] [PMC free article] [PubMed] [Google Scholar]
  111. Xue M, Atallah BV, Scanziani M. Equalizing excitation-inhibition ratios across visual cortical neurons. Nature. 2014;511:596–600. doi: 10.1038/nature13321. [DOI] [PMC free article] [PubMed] [Google Scholar]
  112. Yarden TS, Nelken I. Stimulus-specific adaptation in a recurrent network model of primary auditory cortex. PLOS Computational Biology. 2017;13:e1005437. doi: 10.1371/journal.pcbi.1005437. [DOI] [PMC free article] [PubMed] [Google Scholar]
  113. Yaron A, Hershenhoren I, Nelken I. Sensitivity to complex statistical regularities in rat auditory cortex. Neuron. 2012;76:603–615. doi: 10.1016/j.neuron.2012.08.025. [DOI] [PubMed] [Google Scholar]
  114. Yaron A, Jankowski MM, Badrieh R, Nelken I. Stimulus-specific adaptation to behaviorally-relevant sounds in awake rats. PLOS ONE. 2020;15:e0221541. doi: 10.1371/journal.pone.0221541. [DOI] [PMC free article] [PubMed] [Google Scholar]
  115. Zenke F, Agnes EJ, Gerstner W. Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks. Nature Communications. 2015;6:6922. doi: 10.1038/ncomms7922. [DOI] [PMC free article] [PubMed] [Google Scholar]
  116. Zenke F, Gerstner W, Ganguli S. The temporal paradox of hebbian learning and homeostatic plasticity. Current Opinion in Neurobiology. 2017;43:166–176. doi: 10.1016/j.conb.2017.03.015. [DOI] [PubMed] [Google Scholar]
  117. Zmarz P, Keller GB. Mismatch receptive fields in mouse visual cortex. Neuron. 2016;92:766–772. doi: 10.1016/j.neuron.2016.09.057. [DOI] [PubMed] [Google Scholar]
  118. Znamenskiy P, Kim M-h, Muir DR, Iacaruso F, Hofer SB, Mrsic-Flogel TD. Functional selectivity and specific connectivity of inhibitory neurons in primary visual cortex. bioRxiv. 2018 doi: 10.1101/294835. [DOI]
  119. Zucker RS, Regehr WG. Short-Term Synaptic Plasticity. Annual Review of Physiology. 2002;64:355–405. doi: 10.1146/annurev.physiol.64.092501.114547. [DOI] [PubMed] [Google Scholar]

Decision letter

Editor: Maria N Geffen1
Reviewed by: Maria N Geffen2

Our editorial process produces two outputs: (i) public reviews designed to be posted alongside the preprint for the benefit of readers; (ii) feedback on the manuscript for the authors, including requests for revisions, shown below. We also include an acceptance summary that explains what the editors found interesting or important about the work.

Acceptance summary:

Your paper identifies an important mechanism for generation of novelty signals. It expands current understanding of the functional role of inhibitory plasticity, and makes predictions that can be tested experimentally in future studies.

Decision letter after peer review:

Thank you for submitting your article "The generation of cortical novelty responses through inhibitory plasticity" for consideration by eLife. Your article has been reviewed by 3 peer reviewers, including Maria N Geffen as Reviewing Editor and Reviewer #1, and the evaluation has been overseen by Joshua Gold as the Senior Editor.

The reviewers have discussed their reviews with one another, and the Reviewing Editor has drafted this decision letter to help you prepare a revised submission.

Essential revisions:

1) Please revise the paper to address the specific points raised by all reviewers.

2) Addressing points 1, 3, 4, 5 and 7 by reviewer 1 and points 1, 2, 3 and 5 by reviewer 3 will likely require additional evidence and discussion.

Reviewer #1 (Recommendations for the authors):

This is an interesting and well-written paper which presents the results of a model for cortical plasticity and the resulting increase in neuronal responses to unexpected stimuli. Overall, this is an elegant study that provides a number of interesting, experimentally testable hypotheses and develops a prediction for a mechanism for novelty response generation. The results are clearly presented. We have several suggestions that should help to better integrate the study with known experimental results.

1. Whereas the authors use the term "novelty" response, it is unclear whether this is a true novelty response because a number of stimulus parameters differ from those identified in the literature. The network is not sensitive to temporal structure, which suggests that it does not completely replicate certain aspects of neuronal adaptation in cortex. MEG studies in humans (work from Maria Chait's lab) and Yaron et al., 2012 (SSA under different contexts in rats) all suggest that cortex should exhibit a differential response to different sequences of stimuli drawn from the same distribution. What modifications to the model would produce adaptation to the temporal structure of the stimuli?

2. Figure 2 provides support for a key result of Homann et al., 2017. It would be interesting to see if the network behaves similarly with more complex stimuli and generates novelty responses to complex stimuli. In Homann et al., 2017 experiment, the stimuli were scenes comprised of many Gabor patches. These could potentially broadly activate visual cortex as opposed to the "simple" single stimuli used here. It would be interesting to consider how a broader pattern of stimulation would affect the results.

3. In the experimental results, a large majority of individual neurons exhibited the novelty response. Does this also occur in the model? Would it be possible to present statistics and single-neuron data in addition to the population mean responses?

4. It is unclear how the duration of pre-training relates to plasticity and novelty responses (Figure 4B). Would you expect increased pretraining duration to affect the novelty response behavior? If so, perhaps the duration of pre-training should be varied systematically? Furthermore, how does the number of pre-training stimuli affect the performance?

5. It is difficult to interpret the differences in adaptation between different repeated stimuli (Figure 4C bottom -> green vs blue lines) as compared to the differences between the "late" and "intermediate" time points (4D bottom). Results depicted in figure 4D suggest multiple timescales of adaptation. Are the differences in 4C, which seem to be of similar magnitude, meaningful?

6. One of the primary findings of Natan et al., 2015 was the differential effect of optogenetic disinhibition of SOM vs PV interneurons on the SSA response. The differential effect is seen when disinhibiting the standards -1, +1, +2 etc. relative to the deviant. It would be interesting to see what the effect of disinhibition is on the network response (Figure 6), and how it relates to Natan's findings.

7. The motivation for having different STDP learning rules for E->E and I->E connections is unclear, other than citation of previous work. Can you please explore in more detail the differences between these learning rules?

8. An emerging theme from this paper and from other models such as Yarden and Nelken, 2017 and Park and Geffen, 2020, may be that the tonotopic organization of similarly tuned neurons helps facilitate adaptation. Tuned assemblies were a key feature of the models in these papers. Here, the tonotopic organization arises from the STDP rule, and it would be interesting to discuss the relationship between tonotopic organization and plasticity.

9. Seay et al., 2020, have considered plasticity rules in adaptation as a function of facilitation of specific inhibitory interneurons in SSA. It would be interesting to speculate whether and how this model and the learning rules relate to those published results.

Reviewer #2 (Recommendations for the authors):

The units of the learning rate η of inhibitory plasticity on page 18 are listed in µF and only very briefly discussed. Since η seems to be relatively important for novelty detection, and determines e.g. how many trials a system will typically require to learn a stimulus (see also Figure 2), it should be discussed in more detail, possibly with an additional set of simulations that are complementary to those in Figure 2.

The inhibitory learning rule used here is stated to be confirmed on p. 15. This is arguable, and the jury may still be out. It may be worth discussing how other rules would perform?

Reviewer #3 (Recommendations for the authors):

1. The abstract is a bit cryptic: it states that "inhibitory synaptic plasticity readily generates novelty responses". Inhibitory plasticity is rather vague, it is necessary to read quite far into the results to figure out if the key is short-term synaptic inhibitory plasticity as in a number of models, or some form of long-term associative synaptic plasticity. The abstract should be more informative and explicitly state that the model relies on STDP of Inh->Ex synapse.

2. It would be helpful to provide a bit more information about the model architecture in Figure 1 and the start of the Results section so the reader does not immediately have to go to the Methods section (e.g., network size, ratio of E/I neurons, no STP).

3. In the Methods it would also be helpful to plot the STDP function for excitatory and inhibitory plasticity for spike pairs.

eLife. 2021 Oct 14;10:e65309. doi: 10.7554/eLife.65309.sa2

Author response


Reviewer #1 (Recommendations for the authors):

This is an interesting and well-written paper which presents the results of a model for cortical plasticity and the resulting increase in neuronal responses to unexpected stimuli. Overall, this is an elegant study that provides a number of interesting, experimentally testable hypotheses and develops a prediction for a mechanism for novelty response generation. The results are clearly presented. We have several suggestions that should help to better integrate the study with known experimental results.

We thank the reviewer for her detailed and helpful feedback. We address the raised points in detail below. In short, we added four new supplementary figures related to the reviewer's comments (Figure 1—figure supplements 1, 3, and 4, and Figure 4—figure supplement 1) and substantially rewrote multiple parts of our manuscript.

1. Whereas the authors use the term "novelty" response, it is unclear whether this is a true novelty response because a number of stimulus parameters differ from those identified in the literature. The network is not sensitive to temporal structure, which suggests that it does not completely replicate certain aspects of neuronal adaptation in cortex. MEG studies in humans (work from Maria Chait's lab) and Yaron et al., 2012 (SSA under different contexts in rats) all suggest that cortex should exhibit a differential response to different sequences of stimuli drawn from the same distribution. What modifications to the model would produce adaptation to the temporal structure of the stimuli?

To produce adaptation to the exact temporal structure of stimuli in a sequence, our model will most likely need the addition of short-term plasticity.

As the reviewer points out, in some studies [Yaron et al., 2012, Chait, 2020] a novelty response is defined to be sensitive to the stimulus temporal structure. In our work, we use the term `temporal structure' to refer to the periodic structure of the sequence (i.e. a periodic sequence, e.g. ABCABCABC, vs. a non-periodic sequence, e.g. ACBBACBCA). The term `temporal structure' can also include the duration of a stimulus and the interval between stimuli (see also our answer to comment 2 by reviewer 3). To distinguish between these different possibilities, we now use `sequence structure' instead of `temporal structure' to refer to the periodic structure of the sequence. In Figure 3E, we show that the generation of a novelty response does not depend on having a periodic structure of the sequence. A novelty response still occurs when the stimuli in a sequence are shuffled (compare Figure 3E with Figure 3A). This is similar to the findings of [Yaron et al., 2012], where following the presentation of repeated stimuli in a sequence, an elevated response to a novel stimulus emerges independent of whether the repeated stimuli are presented periodically or randomly (although it is somewhat higher in the case of random stimulus presentation). Therefore, we conclude that a periodic sequence is not necessary for the occurrence of a novelty response.

Interestingly, [Yaron et al., 2012] also found a small, but statistically significant, reduction in the adapted response to repeated stimuli when the stimuli were presented periodically rather than randomly in a sequence. This suggests that there might be an additional adaptation to the presentation of periodic stimuli because these stimuli are better predictable. In contrast, a recent preprint found the opposite: higher responses to repeated stimuli when they were presented periodically rather than randomly [Mehra et al., 2021]. Our current model cannot capture the small differences found in [Yaron et al., 2012] because the responses to repeated and novel stimuli depend only on the statistics of the presented stimuli, and not on the exact temporal structure (periodic vs. random). One plausible way to capture the difference between periodic and random stimulus presentation might be to add short-term plasticity to our model. Several computational studies have already shown that short-term plasticity can generate history dependence in the response to a stimulus (see e.g. [Seay et al., 2020, Phillips et al., 2017]). Periodic stimulus presentation could lead to more short-term depression compared to a random presentation of repeated stimuli and therefore lead to a small reduction of average responses. However, combining short-term plasticity and long-term plasticity in recurrent circuits of excitatory and inhibitory neurons is currently beyond the scope of our work. MEG signals recorded in humans, in the context of mismatch-negativity (MMN) are also sensitive to the temporal structure in sound sequences, but to explain these findings more complex models on different scales are needed (see e.g. [Chait, 2020]).
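
To make the proposed extension concrete, the following is a minimal, purely illustrative sketch of how a Tsodyks-Markram-style short-term depression variable (in the spirit of Tsodyks et al., 1998) could be attached to a synapse. This is not part of our model, and all parameter values are placeholders:

    import numpy as np

    def depressed_weights(spike_times, w_static, U=0.2, tau_rec=0.5):
        """Effective weight of each spike in a presynaptic train under
        short-term depression (illustrative placeholder parameters)."""
        x = 1.0                    # fraction of available synaptic resources
        t_last = None
        effective = []
        for t in spike_times:
            if t_last is not None:
                # resources recover exponentially between spikes
                x = 1.0 - (1.0 - x) * np.exp(-(t - t_last) / tau_rec)
            effective.append(w_static * U * x)  # weight transmitted by this spike
            x -= U * x                          # a fraction U of resources is consumed
            t_last = t
        return effective

    # A rapidly repeated stimulus transmits progressively weaker weights,
    # introducing the history dependence discussed above:
    print(depressed_weights(np.arange(0.0, 2.0, 0.3), w_static=1.0))
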

We now define the term `novelty response' in line 44. In lines 502 and 640 in the Discussion we discuss the difference in responses to periodically vs. randomly presented stimuli in a sequence. In line 654 we acknowledge that further studies including both short- and long-term plasticity are needed to fully uncover all aspects of adapted and novelty responses. We also changed the section title to "Stimulus periodicity in the sequence is not required for the generation of a novelty response" and clarified the notion of `sequence structure' throughout the section.

2. Figure 2 provides support for a key result of Homann et al., 2017. It would be interesting to see if the network behaves similarly with more complex stimuli and generates novelty responses to complex stimuli. In Homann et al., 2017 experiment, the stimuli were scenes comprised of many Gabor patches. These could potentially broadly activate visual cortex as opposed to the "simple" single stimuli used here. It would be interesting to consider how a broader pattern of stimulation would affect the results.

More complex stimuli also generate adaptation to repeated stimuli and produce a novelty response to a novel stimulus (Figure 1—figure supplement 3), similar to the `simple' stimuli used in Figure 1-4.

First, we note that in our model, rather than explicitly modeling oriented and spatially-localized Gabor patches, stimuli that correspond to Gabors of a given orientation simultaneously co-activate subsets of neurons by strongly driving them. Hence, the model analog of the presentation of a sensory stimulus in the experiments is increased input to a subset of neurons. Presenting these stimuli in the pretraining phase leads to the imprinting of structure in the recurrent network where neurons which are selective to similar features (e.g. a similar orientation) become strongly connected due to synaptic plasticity (as seen in experiments, see e.g. [Ko et al., 2011, Ko et al., 2013]). This can be interpreted as imprinting a `backbone' of orientation selective neurons to simulate development, with strong recurrent connections among neurons that share selectivity. Therefore, a stimulus in our pretraining stimulation paradigm is functionally similar to presenting Gabor patches with similar orientations, but does not represent the Gabor patches themselves.

Next, to answer the reviewer's comment, we modified our model to test how it responds to more complex stimuli that correspond to the experiment in [Homann et al., 2017]. As before, we initially simulated development by presenting a set of stimuli during the pretraining phase to generate the `backbone' of orientation selective neurons (Figure 1—figure supplement 3A). In the [Homann et al., 2017] experiment, the visual stimuli consist of 100 randomly oriented Gabor patches. To match these randomly oriented Gabor patches, in the stimulation phase we activated different, randomly chosen, subsets of neurons (note that there is some overlap with the developmentally imprinted subsets). The weight matrix before the stimulation phase does not show any assemblies specific to repeated or novelty stimuli, compared to the weight matrix after the stimulation phase (Figure 1—figure supplement 3E,F, compare left corresponding to before the stimulation phase and right corresponding to after the stimulation phase). We confirmed that these more complex stimuli generate similar results as the `simpler' stimuli used in Figure 1-4. In particular, the more complex stimuli also generate adaptation to repeated stimuli and produce a novelty response to a novel stimulus (Figure 1—figure supplement 3B). As before, the observed results come from an increase of the inhibitory synaptic weights (Figure 1-Figure Supplement 3D, bottom row). In general, the evolution of synaptic weights does not differ qualitatively after the pretraining phase (Figure 1—figure supplement 3C,D, compare with Figure 4B-D).
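
As a purely illustrative sketch of this stimulation scheme (network size, assembly size, and input rates are placeholders, not the values used in our simulations), a `complex' stimulus can be represented as a randomly drawn subset of excitatory neurons that receives elevated external input:

    import numpy as np

    rng = np.random.default_rng(0)
    N_E = 4000                  # number of excitatory neurons (placeholder)
    assembly_size = 200         # neurons per stimulated subset (placeholder)

    # `Backbone' assemblies imprinted during the pretraining phase
    # (taken as disjoint blocks here for simplicity)
    pretraining_assemblies = [np.arange(i * assembly_size, (i + 1) * assembly_size)
                              for i in range(10)]

    # A `complex' stimulus in the stimulation phase: a random subset of neurons,
    # which may partially overlap with the imprinted assemblies
    complex_stimulus = rng.choice(N_E, size=assembly_size, replace=False)

    # During presentation, members of the stimulated subset receive a higher
    # external input rate than the rest of the network (rates are placeholders)
    external_rate = np.full(N_E, 4.5)          # baseline rate (Hz)
    external_rate[complex_stimulus] = 12.0     # elevated rate for stimulated neurons (Hz)

    overlap = len(np.intersect1d(complex_stimulus, pretraining_assemblies[0]))
    print(f"overlap with the first imprinted assembly: {overlap} neurons")
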

We present results with these more complex stimuli in a new figure (Figure 1—figure supplement 3) and discuss them in our Results section in lines 151 and 277, the Methods section in lines 757 and 776, and the new Discussion section `Robustness of the model' in line 533.

3. In the experimental results, a large majority of individual neurons exhibited the novelty response. Does this also occur in the model? Would it be possible to present statistics and single-neuron data in addition to the population mean responses?

In the model, the fraction of active excitatory neurons is qualitatively similar for novel, adapted, and onset stimuli (Figure 1—figure supplement 4), in contrast to the findings of [Homann et al., 2017] (see justification below). However, the fraction of active neurons increases when a disinhibitory signal is applied (Figure 7C, E).

Homann et al., 2017 found that a repeated stimulus sparsely activates neurons in V1, referred to as a low density or a sparse response, whereas a novel stimulus evokes excess activity in a much larger fraction of the neurons in V1, referred to as a high density or a dense response [Homann et al., 2017]. To quantify the density of the adapted and novel responses in the model, we used the fraction of active neurons as a measure of the density of a response. We call active neurons those that spike at least once within a 100 ms window directly after the onset of the stimulus (onset), after the onset of the novel stimulus (novelty), and shortly before the novel stimulus is presented (adapted). In the model, the fraction of active excitatory neurons is qualitatively similar or even smaller for novel than for the adapted and onset stimuli (Figure 1—figure supplement 4A). Therefore, our model does not capture the dense novelty response as described in [Homann et al., 2017]. Why does this happen? Upon the presentation of a novel stimulus, inhibitory plasticity does not sufficiently increase inhibitory input into excitatory neurons to counteract the excess excitatory input. As a result, the high firing rates of excitatory neurons that are tuned to the novel stimulus strongly drive the entire inhibitory population. This in turn increases inhibition onto the entire excitatory population, including neurons not tuned to the novel stimulus. Since the firing rates of the excitatory neurons that are tuned to the adapted stimulus are lower (compared to the firing rates when presenting a novel stimulus), the adapted stimulus drives the entire inhibitory population less strongly, reducing inhibition onto the entire excitatory population. Hence, the fraction of active excitatory neurons in the whole network is lower when presenting a novel stimulus compared to presenting an adapted stimulus (Figure 1—figure supplement 4A). Since the increase in inhibition seems to be responsible for the absence of a dense novelty response, we hypothesized that a dense novelty response might result from a disinhibitory signal as explored in Figure 7. Indeed, in this scenario we observed an increase in the response density in our model (Figure 7C). The disinhibitory signal increases the fraction of active neurons as the strength of disinhibition increases (Figure 7E). Therefore, we hypothesize that the dense novelty response observed experimentally by [Homann et al., 2017] could be achieved by disinhibitory feedback.
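
For concreteness, the density measure described above can be sketched as follows; the spike-data format is an assumption for illustration, while the 100 ms window matches the definition given in the text:

    import numpy as np

    def fraction_active(spike_times, spike_ids, n_neurons, window_start, window_len=0.1):
        """Fraction of neurons spiking at least once in a 100 ms window.

        spike_times: array of spike times (s); spike_ids: neuron index of each spike.
        """
        in_window = (spike_times >= window_start) & (spike_times < window_start + window_len)
        return len(np.unique(spike_ids[in_window])) / n_neurons

    # Tiny synthetic example: two distinct neurons spike within the window -> 0.2
    times = np.array([0.01, 0.02, 0.05, 0.15])
    ids = np.array([3, 7, 3, 9])
    print(fraction_active(times, ids, n_neurons=10, window_start=0.0))

    # In practice the window would be placed directly after the stimulus onset,
    # after the novel stimulus onset, and shortly before the novel stimulus
    # (placeholder onset times t_onset, t_novel, t_adapted).
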

We included a new supplementary figure (Figure 1—figure supplement 4) in which we show single-neuron statistics (the fraction of active excitatory and inhibitory neurons for different stimuli) as well as the complete spike raster during an entire sequence block. We discuss this in lines 162, 389 and in the Discussion section in line 557. Furthermore, to be consistent throughout our study, we modified the measurement of density in Figure 7C, E (we also measure the fraction of active excitatory neurons).

4. It is unclear how the duration of pre-training relates to plasticity and novelty responses (Figure 4B). Would you expect increased pretraining duration to affect the novelty response behavior? If so, perhaps the duration of pre-training should be varied systematically? Furthermore, how does the number of pre-training stimuli affect the performance?

Varying the pretraining duration and the number of pretraining stimuli does not qualitatively change the novelty response and its properties (Figure 4—figure supplement 1): Novelty responses can be reliably detected across a large range of varied pretraining parameters. The novelty peak height increases with an increased number of stimulus repetitions during pretraining and decreases with the number of different stimuli presented.

We followed the reviewer's suggestion and tested the effect of two parameters of the pretraining (the number of repetitions of each stimulus and the total number of stimuli presented during the pretraining phase) on the novelty response. In the standard repeated sequence paradigm (Figure 3, 4), a total of 65 stimuli were presented (5 x 3 repeated + 5 x 10 novel stimuli). Therefore, this is the baseline number of stimuli presented during pretraining. The number of repetitions refers to how often each of these 65 stimuli is presented. When testing the effect of the number of stimuli we varied the total number of stimuli presented 5 times each during pre-training. For example, 105 means that 40 additional subsets of neurons were stimulated during pretraining that are not part of the consecutive repeated sequence stimulation paradigm. We found that increasing the number of repetitions in the pretraining phase increases the novelty peak height before reaching a plateau at around 10 repetitions (Figure 4—figure supplement 1A), and increases the average inhibitory synaptic weights onto stimulus assemblies (Figure 4—figure supplement 1B). Increasing the number of stimuli decreases the novelty peak height (Figure 4—figure supplement 1C), but has little influence on the average inhibitory synaptic weights onto stimulus assemblies (Figure 4—figure supplement 1D). In summary, we found that while the pretraining parameters affect some aspects of the novelty response, they do not qualitatively impact our results. Even without a pretraining phase (zero number of repetitions), a novelty response occurs.

We included a new Figure 4—figure supplement 1 and provide a discussion of our findings on the pretraining parameters in line 285, in the Discussion section in line 536, and in the Methods section in line 781.

5. It is difficult to interpret the differences in adaptation between different repeated stimuli (Figure 4C bottom -> green vs blue lines) as compared to the differences between the "late" and "intermediate" time points (4D bottom). Results depicted in figure 4D suggest multiple timescales of adaptation. Are the differences in 4C, which seem to be of similar magnitude, meaningful?

The differences in the magnitude in Figure 4C are not meaningful and result from randomness in the model.

This aspect is indeed very important and we agree that it needs to be clarified. For each instantiation of the same model with different initial connectivity and assembly size, one would get a different ordering of the three traces corresponding to stimuli A, B and C in Figure 4C (bottom). Specifically, these differences are due to small differences in the initial random connectivity before the pretraining phase, which get amplified during the simulation due to plasticity. In particular, the strength of activation of different excitatory assemblies varies, leading to the variable synaptic connection strengths shown in Figure 4C (bottom).

In contrast, the differences in the inhibitory weights in Figure 4D (bottom) are meaningful and suggest multiple timescales of adaptation. By averaging out the differences due to randomness in Figure 4C (bottom), we find a clear increasing trend of the average inhibitory weights onto sequence 1 at later time points (Figure 4D, bottom). For another instantiation of the same model with different initial connectivity and assembly size, one would get exactly the same ordering of the three traces corresponding to early, intermediate and late in Figure 4D. This can for example also be seen in Figure 1—figure supplement 3D, where we test more complex stimuli (different in the pretraining and stimulated phases). The main reason for the increase in Figure 4D (bottom) follows from the strengthening of average inhibitory weights onto a specific sequence as the frequency of sequence presentation increases: at later time points the sequence has been presented more frequently than at earlier times, leading to the increase in average inhibitory weights (see also our answer to comment 7 of reviewer 1 and comment 1 of reviewer 3).

We now provide an additional explanation in the Results section (see line 280).

6. One of the primary findings of Natan et al., 2015 was the differential effect of optogenetic disinhibition of SOM vs PV interneurons on the SSA response. The differential effect is seen when disinhibiting the standards -1, +1, +2 etc. relative to the deviant. It would be interesting to see what the effect of disinhibition is on the network response (Figure 6), and how it relates to Natan's findings.

Disinhibiting the standards at certain post-deviant stimulus time-points (-1, 0, +1, +2, etc.) in our model led to an equal increase of the firing rate response to standard tones (as with suppressing PVs in [Natan et al., 2015]), and a reduction in the firing rate change of the deviant stimulus compared to the standard stimuli (as with suppressing SOM in [Natan et al., 2015]).

Following the suggestion from the reviewer, we performed similar experiments as Figure 5 from [Natan et al., 2015]. In the experiments, whenever a tone was played, either the SOM or PV cells were optogenetically suppressed. Since our model only has a single class of inhibitory interneurons, the suppression was always applied to the entire inhibitory population. In particular, we applied disinhibition (suppressed the entire inhibitory population) during the standard (or repeated) stimulus A at time points -1, +1, +2, +3, +4 relative to the deviant stimulus (called the post-deviant stimulus number), and the deviant (or novel) stimulus B (post-deviant stimulus number 0) (Author response image 1A). We found that suppressing the inhibitory population led to an equal increase in firing rates for all standard stimuli (post-deviant stimulus numbers -1, +1, +2, +3, +4) relative to the non-disinhibited case (Author response image 1B). This agrees with the results of [Natan et al., 2015] when PV interneurons were suppressed, but not when SOM interneurons were suppressed, where the increase in responses to standard stimuli was dependent on the post-deviant stimulus number (compare Author response image 1 with Figure 5B from [Natan et al., 2015]). In addition, we observed a reduction in the firing rate of the deviant stimulus (post-deviant stimulus number 0) compared to the standard stimuli (post-deviant stimulus numbers -1, +1, +2, +3, +4) (compare red and grey bars in Author response image 1B). This agrees with the results of [Natan et al., 2015] where SOM interneurons were suppressed (but note that in the experimental data, the deviant firing rate change was almost zero), but not when PV interneurons were suppressed. Therefore, our model captures some, but not all, aspects of the experimental data where multiple interneuron types were manipulated. Adding multiple interneuron subtypes (as in [Natan et al., 2015, Park and Geffen, 2020]) and possibly including different interneuron-specific plasticity rules [Agnes et al., 2020] is a promising line of future investigation.

Author response image 1. Disinhibiting the standard and deviant stimulus leads to differential increase in responses to standard and deviant tones.

Author response image 1.

A. Population firing rate of excitatory neurons in response to standard stimuli (gray) or to deviant stimuli (red) without disinhibition (dark colors) and with disinhibition (suppression of the total inhibitory population) (light colors). All responses are normalized to the response to the fourth non-disinhibited post-novelty stimulus of one instantiation of the model. B. Difference between firing rates with and without disinhibition from panel A in response to standard (gray) and deviant (red) stimuli. Error bars correspond to the standard deviation across three model instantiations.

We discuss this comparison in our manuscript line 651. In case the reviewers feel that the additional results from Author response image 1 would strengthen the manuscript and would make it easier to understand our study in the context of these previous experimental findings, we would be happy to include them in the manuscript.

7. The motivation for having different STDP learning rules for E->E and I->E connections is unclear, other than citation of previous work. Can you please explore the differences between these learning rules in more detail?

We now explore the motivation for using different learning rules for E-to-E and I-to-E connections in our manuscript. Our main motivation was to use biologically-inspired plasticity rules, which have important functional implications. For the plasticity of E-to-E and I-to-E connections multiple rules fulfil these requirements. Previous modelling studies have done extensive comparisons of the functional implications of multiple STDP rules. Hence, we did not implement different rules for our paradigm but discuss them in greater depth.

For excitatory-to-excitatory (E-to-E) synapses we used the triplet spike-timing-dependent plasticity (STDP) rule. More classically, the plasticity of E-to-E synapses has been argued to follow pair-based STDP, where the order and timing of pairs of spikes determine the induction of potentiation vs. depression [Bi and Poo, 1998]. Specifically, if a presynaptic spike comes before a postsynaptic spike the synapse is potentiated, while if a postsynaptic spike comes before a presynaptic spike the synapse is depressed, as long as the timing between spikes is on the order of tens of milliseconds. However, such a pair-based STDP rule cannot explain plasticity where the frequency of pre- and postsynaptic spikes varies [Sjöström et al., 2001]. To capture these data, the triplet STDP rule was proposed, where a third spike modifies the amount of potentiation and depression evoked by pair-based STDP [Pfister and Gerstner, 2006]. Hence, triplets rather than pairs of spikes seem to be more appropriate as building blocks for synaptic plasticity for E-to-E synapses.
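
As an illustration of the rule's structure (not the exact implementation or parameters of our model), a minimal event-driven sketch of the trace-based triplet rule of [Pfister and Gerstner, 2006] for a single E-to-E weight could look as follows; trace time constants and amplitudes are placeholders:

    import numpy as np

    tau_plus, tau_x = 0.017, 0.101    # presynaptic trace time constants (s)
    tau_minus, tau_y = 0.034, 0.125   # postsynaptic trace time constants (s)
    A2p, A3p = 5e-10, 6.2e-3          # pair / triplet potentiation amplitudes
    A2m, A3m = 7e-3, 2.3e-4           # pair / triplet depression amplitudes

    def triplet_stdp(pre_spikes, post_spikes, w=0.5):
        """Update one E-to-E weight from sorted pre-/postsynaptic spike times (s)."""
        r1 = r2 = o1 = o2 = 0.0       # synaptic traces
        t_last = 0.0
        events = sorted([(t, 'pre') for t in pre_spikes] +
                        [(t, 'post') for t in post_spikes])
        for t, kind in events:
            dt = t - t_last
            r1 *= np.exp(-dt / tau_plus); r2 *= np.exp(-dt / tau_x)
            o1 *= np.exp(-dt / tau_minus); o2 *= np.exp(-dt / tau_y)
            if kind == 'pre':
                w -= o1 * (A2m + A3m * r2)   # depression, boosted by an earlier pre spike
                r1 += 1.0; r2 += 1.0
            else:
                w += r1 * (A2p + A3p * o2)   # potentiation, boosted by an earlier post spike
                o1 += 1.0; o2 += 1.0
            t_last = t
        return w

The triplet terms (A3p, A3m) give the rule its dependence on firing frequency; setting them to zero recovers a purely pair-based rule.
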

Functionally, the pair-based STDP rule cannot easily form neuronal assemblies; because pre-post spike pairs lead to potentiation and post-pre spike pairs lead to depression, the pair-based STDP rule leads to competition between reciprocal synapses, preventing the strengthening of bidirectional connections, and consequently self connected assemblies (but see e.g. [Babadi and Abbott, 2013]). Unlike the pair-based STDP rule, the triplet STDP rule supports the formation of bidirectional connections between neurons that experience correlated activity [Pfister and Gerstner, 2006, Gjorgjieva et al., 2011]. As a result, this rule can support the formation of self-connected assemblies in recurrent networks [Litwin-Kumar and Doiron, 2014, Zenke et al., 2015, Montangie et al., 2020]. Therefore, we used the triplet STDP rule for E-to-E connections to generate excitatory assemblies as a model for the different stimuli in a sequence (as shown in Figure 1—figure supplement 2C). Besides the triplet STDP rule, there exist also other E-to-E learning rules which allow for assembly formation. These include the voltage-based rule (see [Clopath et al., 2010] for details) and a calcium-based rule (see [Graupner and Brunel, 2010] for details). In a previous study, it has been shown that all three E-to-E learning rules allow for assembly formation (Figure 5 in [Litwin-Kumar and Doiron, 2014]).

For inhibitory-to-excitatory (I-to-E) synapses we used the inhibitory spike-timing-dependent plasticity (iSTDP) rule, initially suggested on theoretical grounds by [Vogels et al., 2011] for its ability to homeostatically stabilize firing rates in recurrent networks. According to this rule, the timing but not the order of pre- and postsynaptic spikes matters for the induction of synaptic plasticity. Hence, a pair of spikes that occurs within tens of milliseconds of each other can induce potentiation, and otherwise depression. This rule has been widely used in numerous computational models of recurrent networks to stabilize firing rate dynamics and balance excitation and inhibition ([Litwin-Kumar and Doiron, 2014, Zenke et al., 2015]; among others). More recently, this rule was also verified to operate in the auditory cortex of mice [D'amour and Froemke, 2015]. However, other inhibitory rules have also been measured experimentally, including classical Hebbian and anti-Hebbian (e.g. [Holmgren and Zilberter, 2001,Woodin et al., 2003, Haas et al., 2006]; for a review see [Hennequin et al., 2017]). Computational studies have started investigating the effect of such different inhibitory learning rules (see [Luz and Shamir, 2012, Kleberg et al., 2014]).

The generation of adapted and novelty responses in our model depends on a `negative feedback' mechanism of inhibitory plasticity (see also our response to comment 2 of reviewer 2). As long as inhibitory synapses potentiate in response to high excitatory firing rates, and decrease in response to low excitatory firing rates, the firing rates in response to repeated stimuli will decrease (i.e. adapt). Therefore, we expect that any inhibitory plasticity rule which incorporates a negative feedback mechanism will lead to the adaptation of responses to repeated stimuli. Another such candidate (besides the iSTDP rule we used) is a classical Hebbian inhibitory plasticity rule [Luz and Shamir, 2012]. We further explain the choice of inhibitory plasticity rule in response to comment 2 of reviewer 2, and comment 1 of reviewer 3.
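
For comparison, a minimal sketch of the symmetric iSTDP rule of [Vogels et al., 2011] for a single I-to-E weight (parameter values are placeholders, not those of our model) makes this negative-feedback property explicit: isolated inhibitory presynaptic spikes depress the weight by a constant amount, while near-coincident pre- and postsynaptic spikes potentiate it, so inhibition grows onto strongly driven excitatory neurons.

    import numpy as np

    tau_istdp = 0.020                 # trace time constant (s), placeholder
    rho0 = 3.0                        # target postsynaptic rate (Hz), placeholder
    alpha = 2.0 * rho0 * tau_istdp    # depression bias applied at presynaptic spikes
    eta = 1e-2                        # inhibitory learning rate, placeholder

    def istdp(pre_spikes, post_spikes, w=0.5):
        """Update one I-to-E weight from sorted pre-/postsynaptic spike times (s)."""
        x_pre = x_post = 0.0          # pre- and postsynaptic traces
        t_last = 0.0
        events = sorted([(t, 'pre') for t in pre_spikes] +
                        [(t, 'post') for t in post_spikes])
        for t, kind in events:
            decay = np.exp(-(t - t_last) / tau_istdp)
            x_pre *= decay; x_post *= decay
            if kind == 'pre':                 # inhibitory presynaptic spike
                x_pre += 1.0
                w += eta * (x_post - alpha)   # potentiate if the target fired recently
            else:                             # excitatory postsynaptic spike
                x_post += 1.0
                w += eta * x_pre
            w = max(w, 0.0)                   # inhibitory weights stay non-negative
            t_last = t
        return w
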

We have included an additional supplementary figure to clarify our choice of the STDP functions for excitatory and inhibitory plasticity (Figure 1—figure supplement 1). To justify our choice of synaptic plasticity rules, we added text in the Results (line 115), Methods (lines 709, 724, and 803), and Discussion (line 451).

8. An emerging theme from this paper and from other models such as Yarden and Nelken, 2017 and Park and Geffen, 2020, may be that the tonotopic organization of similarly tuned neurons helps facilitate adaptation. Tuned assemblies were a key feature of the models in these papers. Here, the tonotopic organization arises from the STDP rule, and it would be interesting to discuss the relationship between tonotopic organization and plasticity.

In our model, tonotopy, or more generally topography, can emerge from the triplet excitatory STDP rule, but does not affect our modeling results and predictions.

Neuronal assemblies tuned to sensory stimuli are a key feature in our model. However, in its current implementation we do not have topographic organization of the assemblies. We refer to it as topographic here because our model is sufficiently general to apply to the auditory system (tonotopy) as well as the visual system (retinotopy), which in fact was the main inspiration for our model (experiments in [Homann et al., 2017]). In our model, neuronal assemblies represent strongly connected subsets of excitatory neurons which form due to the functional properties of the triplet STDP shaping the plasticity between excitatory neurons. As we discussed previously in our answer to comment 7, the triplet STDP rule can strengthen connections between two neurons bidirectionally. Hence, the rule allows the formation of assemblies of neurons which receive similar inputs and experience correlated activity (see our previous work: [Gjorgjieva et al., 2011, Montangie et al., 2020]). We interpret these assemblies as being tuned to a given stimulus (e.g. an oriented bar, or the frequency of a sound).

We could introduce topography (retinotopy or tonotopy) in our model by allowing our assemblies to overlap in a structured way. For example, we could define assembly 1 (coding for a given frequency) to have 50% overlap with assembly 2 (coding for a similar frequency), assembly 2 to have 50% overlap with assembly 3, etc. Qualitatively, introducing such topography would not affect our findings on the generation of adapted and novelty responses, but would add additional complexity, so we decided not to include it. A sketch of how such structured overlap could be constructed is given below.
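
This is a purely illustrative sketch of the construction described above; assembly size, number of assemblies, and the 50% overlap fraction are placeholders:

    import numpy as np

    assembly_size = 200              # neurons per assembly (placeholder)
    n_assemblies = 10
    overlap = assembly_size // 2     # 50% overlap between neighbouring assemblies

    # Neighbouring assemblies share half of their neurons, so assemblies that
    # code for similar frequencies (or orientations) overlap the most.
    step = assembly_size - overlap
    assemblies = [np.arange(i * step, i * step + assembly_size)
                  for i in range(n_assemblies)]

    shared = len(np.intersect1d(assemblies[0], assemblies[1]))
    print(f"assemblies 0 and 1 share {shared} of {assembly_size} neurons")
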

We have expanded the discussion on assembly formation and how it could generate topography in our model (see lines 803 and 809). For the reasons mentioned above we have decided to not include topography in the model.

9. Seay et al., 2020, have considered plasticity rules in adaptation as a function of facilitation of specific inhibitory interneurons in SSA. It would be interesting to speculate whether and how this model and the learning rules relate to those published results.

Multiple differences in setup between our model and the model in [Seay et al., 2020] make a direct comparison difficult. However, combining the mechanisms used in both studies is a promising future direction.

Using a computational model, [Seay et al., 2020] demonstrate that different experimentally measured short-term plasticity at the synapses from PV and SOM interneurons onto pyramidal neurons can account for diverse responses in the auditory cortex, from adapted to facilitated responses. In contrast in our study, we focus on long-term inhibitory plasticity and specifically on the generation of novelty responses. In addition, [Seay et al., 2020] study a feedforward circuit experiencing activity and plasticity on rather short timescales (e.g. 400 ms in the n+1 experiment), while we study a recurrent circuit operating on longer timescales (seconds to minutes, see Figure 4). These differences make it difficult to relate the two models (see also our answer to comment 3 of reviewer 3). However, in future work it would be interesting to combine the mechanism of inhibitory long-term plasticity that we implement, with the diverse short-term plasticity mechanisms from [Seay et al., 2020].

We added the reference on multiple occasions in our manuscript and discuss it on line 499.

Reviewer #2 (Recommendations for the authors):

The units of the learning rate η of inhibitory plasticity on page 18 are listed in µF and only very briefly discussed. Since η seems to be relatively important for novelty detection, and determines e.g. how many trials a system will typically require to learn a stimulus (see also Figure 2), it should be discussed in more detail, possibly with an additional set of simulations that are complementary to those in Figure 2.

The timescale of inhibitory plasticity (η) is an important parameter in our model, which strongly influences the response amplitude and the decay time constant of the novelty response (Figure 4—figure supplement 2).

We followed the suggestion of the reviewer and systematically varied the learning rate η to see how it affects the novelty response. We found that the timescale of inhibitory plasticity needs to be `sufficiently fast' for the generation of the novelty response. For learning rates below η = 0.5 pF we no longer observe adaptation to repeated stimuli or a novelty response (Figure 4—figure supplement 2A). The response amplitude of the novelty response increases (Figure 4—figure supplement 2B), while the decay time constant decreases (Figure 4—figure supplement 2C) with increasing inhibitory learning rate.

We have added new text in the Discussion under the subsections `Timescales of plasticity mechanisms' (line 509) and `Robustness of the model' (line 533) where we discuss the timescales of inhibitory plasticity. We also included a new supplementary figure (Figure 4—figure supplement 2) where we demonstrate the effect of varying η (see also our answer to comment 1 of reviewer 3), which we discuss in line 293 and mention in the Methods in line 739.

The inhibitory learning rule used here is stated to be confirmed on p. 15. This is arguable, and the jury may still be out. It may be worth discussing how other rules would perform?

Indeed, several inhibitory learning rules have been measured experimentally. We based our work on the iSTDP rule, originally proposed on a theoretical basis to homeostatically stabilize firing rate dynamics in recurrent networks [Vogels et al., 2011], and measured experimentally in the auditory cortex by [D'amour and Froemke, 2015]. This rule is symmetric in that the order of pre- and postsynaptic spikes does not matter for the induction of plasticity, only their timing. Other inhibitory rules have also been measured experimentally, including classical Hebbian and anti-Hebbian (e.g. [Holmgren and Zilberter, 2001,Woodin et al., 2003, Haas et al., 2006]; for a review see [Hennequin et al., 2017]). Furthermore, the learning rules seem to depend on the type of the interneuron [Udakis et al., 2020].

Computational studies have started investigating the effect of different inhibitory learning rules, albeit primarily in feedforward networks [Luz and Shamir, 2012, Kleberg et al., 2014]. The generation of adapted and novel responses in our model depends on a `negative feedback' mechanism of inhibitory plasticity. As long as inhibitory synapses potentiate in response to high excitatory firing rates, and decrease in response to low excitatory firing rates, the firing rates in response to repeated stimuli will decrease. Therefore, we expect that any inhibitory plasticity rule which incorporates a negative feedback mechanism would lead to adaptation of the responses to familiar stimuli. [Luz and Shamir, 2012] demonstrate that a classical Hebbian inhibitory plasticity rule can also implement negative feedback. See also our answers to the related questions in comment 7 of reviewer 1 and comment 1 of reviewer 3.

We elaborate on this point in the Methods section, and mention other experimentally measured inhibitory learning rules (line 724) and their computational properties (line 451).

Reviewer #3 (Recommendations for the authors):

1. The abstract is a bit cryptic: it states that "inhibitory synaptic plasticity readily generates novelty responses". Inhibitory plasticity is rather vague, it is necessary to read quite far into the results to figure out if the key is short-term synaptic inhibitory plasticity as in a number of models, or some form of long-term associative synaptic plasticity. The abstract should be more informative and explicitly state that the model relies on STDP of Inh->Ex synapse.

We modified the abstract as suggested by the reviewer and state that inhibitory spike-timing-dependent plasticity is the underlying mechanism of adaptation in our model.

2. It would be helpful to provide a bit more information about the model architecture in Figure 1 and the start of the Results section so the reader does not immediately have to go to the Methods section (e.g., network size, ratio of E/I neurons, no STP).

We included further details of the model at the start of the Results section, see line 111.

3. In the Methods it would also be helpful to plot the STDP function for excitatory and inhibitory plasticity for spike pairs.

We added a new supplementary figure (Figure 1—figure supplement 1) which shows the STDP function for excitatory and inhibitory plasticity for spike pairs and mention it in lines 117 and 739. We did this for spike pairs of different frequencies because this is what makes the triplet STDP rule that we use very different from the more classical pair-based STDP rule.

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    Transparent reporting form

    Data Availability Statement

    The code to reproduce the figures for this paper has been uploaded to GitHub and can be accessed here: https://github.com/comp-neural-circuits/novelty-via-inhibitory-plasticity (copy archived at https://archive.softwareheritage.org/swh:1:rev:d368b14a2368925b290923c2c11411d7b7a40bd1).


    Articles from eLife are provided here courtesy of eLife Sciences Publications, Ltd
