Abstract
In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality.
Keywords: accelerated neuromorphic hardware system, universal computing substrate, highly configurable, mixed-signal VLSI, spiking neural networks, soft winner-take-all, classifier, cortical model
1. Introduction
By nature, computational neuroscience has a high demand for powerful and efficient devices for simulating neural network models. In contrast to conventional general purpose machines based on a von-Neumann architecture, neuromorphic systems are, in a rather broad definition, a class of devices which implement particular features of biological neural networks in their physical circuit layout (Mead, 1989; Indiveri et al., 2009; Renaud et al., 2010). In order to discern more easily between computational substrates, the term emulation is generally used when referring to neural networks running on a neuromorphic back-end.
Several aspects motivate the neuromorphic approach. The arguably most characteristic feature of neuromorphic devices is inherent parallelism enabled by the fact that individual neural network components (essentially neurons and synapses) are physically implemented in silico. Due to this parallelism, scaling of emulated network models does not imply slowdown, as is usually the case for conventional machines. The hard upper bound in network size (given by the number of available components on the neuromorphic device) can be broken by scaling of the devices themselves, e.g., by wafer-scale integration (Schemmel et al., 2010) or massively interconnected chips (Merolla et al., 2011). Emulations can be further accelerated by scaling down time constants compared to biology, which is enabled by deep submicron technology (Schemmel et al., 2006, 2010; Brüderle et al., 2011). Unlike high-throughput computing with accelerated systems, real-time systems are often specialized for low power operation (e.g., Farquhar and Hasler, 2005; Indiveri et al., 2006).
However, in contrast to the unlimited model flexibility offered by conventional simulation, the network topology and parameter space of neuromorphic systems are often tailored to predefined applications and therefore rather restricted (e.g., Merolla and Boahen, 2006; Serrano-Gotarredona et al., 2006; Akay, 2007; Chicca et al., 2007). Enlarging the configuration space always comes at the cost of hardware resources by occupying additional chip area. Consequently, the maximum network size is reduced, or the configurability of one aspect is decreased by increasing the configurability of another. Still, configurability costs can be counterbalanced by decreasing precision. This could concern the size of integration time steps (Imam et al., 2012a), the granularity of particular parameters (Pfeil et al., 2012), or fixed-pattern noise affecting various network components. At least the latter can be, to some extent, moderated through elaborate calibration methods (Neftci and Indiveri, 2010; Brüderle et al., 2011; Gao et al., 2012).
In this study, we present a user-friendly integrated development environment that can serve as a universal neuromorphic substrate for emulating different types of neural networks. Apart from almost arbitrary network topologies, this system provides a vast configuration space for neuron and synapse parameters (Schemmel et al., 2006; Brüderle et al., 2011). Reconfiguration is achieved on-chip and does not require additional support hardware. While some models can easily be transferred from software simulations to the neuromorphic substrate, others need modifications. These modifications take into account the limited hardware resources and compensate for fixed-pattern noise (Brüderle et al., 2009, 2010, 2011; Kaplan et al., 2009; Bill et al., 2010). In the following, we show six more networks emulated on our hardware system, each requiring its own hardware configuration in terms of network topology and neuronal as well as synaptic parameters.
2. The Neuromorphic System
The central component of our neuromorphic hardware system is the neuromorphic microchip Spikey. It contains analog very-large-scale integration (VLSI) circuits modeling the electrical behavior of neurons and synapses (Figure 1). In such a physical model, measurable quantities in the neuromorphic circuitry have corresponding biological equivalents. For example, the membrane potential Vm of a neuron is modeled by the voltage over a capacitor Cm that, in turn, can be seen as a model of the capacitance of the cell membrane. In contrast to numerical approaches, dynamics of physical quantities like Vm evolve continuously in time. We designed our hardware systems to have time constants approximately 10^4 times faster than their biological counterparts allowing for high-throughput computing. This is achieved by reducing the size and hence the time constant of electrical components, which also allows for more neurons and synapses on a single chip. To avoid confusion between hardware and biological domains of time, voltages, and currents, all parameters are specified in biological domains throughout this study.
2.1. The neuromorphic chip
On Spikey (Figure 1), a VLSI version of the standard leaky integrate-and-fire (LIF) neuron model with conductance-based synapses is implemented (Dayan and Abbott, 2001):
Cm · dVm/dt = −gl · (Vm − El) − Σi gi · (Vm − Ei)    (1)
For its hardware implementation, see Figure 1 (Schemmel et al., 2006; Indiveri et al., 2011).
Synaptic conductances gi (with the index i running over all synapses) drive the membrane potential Vm toward the reversal potential Ei, with Ei ∈ {Eexc, Einh}. The time course of the synaptic activation is modeled by
gi(t) = pi(t) · gi^max · wi    (2)
where gi^max are the maximum conductances and wi the weights for each synapse, respectively. The time course pi(t) of synaptic conductances is a linear transformation of the current pulses shown in Figure 1 (green), and hence an exponentially decaying function of time. The generation of conductances at the neuron side is described in detail by Indiveri et al. (2011), and postsynaptic potentials are measured by Schemmel et al. (2007).
The implementation of spike timing dependent plasticity (STDP; Bi and Poo, 1998; Song et al., 2000) modulating wi over time is described in Schemmel et al. (2006) and Pfeil et al. (2012). Correlation measurement between pre- and post-synaptic action potentials is carried out in each synapse, and the 4-bit weight is updated by an on-chip controller located in the digital part of the Spikey chip. However, STDP will not be further discussed in this study.
Short-term plasticity (STP) modulates gi^max (Schemmel et al., 2007), similar to the model by Tsodyks and Markram (1997) and Markram et al. (1998). On hardware, STP can be configured individually for each synapse line driver, which corresponds to an axonal connection in biological terms. It can either be facilitating or depressing.
The propagation of spikes within the Spikey chip is illustrated in Figure 1 and described in detail by Schemmel et al. (2006). Spikes enter the chip as time-stamped events using standard digital signaling techniques that facilitate long-range communication, e.g., to the host computer or other chips. Such digital packets are processed in discrete time in the digital part of the chip, where they are transformed into digital pulses entering the synapse line driver (blue in Figure 1A). These pulses propagate in continuous time between on-chip neurons, and are optionally transformed back into digital spike packets for off-chip communication.
2.2. System environment
The Spikey chip is mounted on a network module described and schematized in Fieres et al. (2004) and Figure 2, respectively. Digital spike and configuration data is transferred via direct connections between a field-programmable gate array (FPGA) and the Spikey chip. Onboard digital-to-analog converter (DAC) and analog-to-digital converter (ADC) components supply external parameter voltages to the Spikey chip and digitize selected voltages generated by the chip for calibration purposes. Furthermore, up to eight selected membrane voltages can be recorded in parallel by an oscilloscope. Because communication between a host computer and the FPGA has a limited bandwidth that does not satisfy real-time operation requirements of the Spikey chip, experiment execution is controlled by the FPGA while operating the Spikey chip in continuous time. To this end, all experiment data is stored in the local random access memory (RAM) of the network module. Once the experiment data is transferred to the local RAM, emulations run with an acceleration factor of 10^4 compared to biological real-time. This acceleration factor applies to all emulations shown in this study, independent of the size of networks.
Execution of an experiment is split up into three steps (Figure 2). First, the control software within the memory of the host computer generates configuration data (Table 1, e.g., synaptic weights, network connectivity, etc.), as well as input stimuli to the network. All data is stored as a sequence of commands and is transferred to the memory on the network module. In the second step, a playback sequencer in the FPGA logic interprets this data and sends it to the Spikey chip, as well as triggers the emulation. Data produced by the chip, e.g., neuronal activity in terms of spike times, is recorded in parallel. In the third and final step, this recorded data stored in the memory on the network module is retrieved and transmitted to the host computer, where it is processed by the control software.
Table 1.
| Scope | Name | Type | Description |
|---|---|---|---|
| Neuron circuits (A) | n/a | i_n | Two digital configuration bits activating the neuron and readout of its membrane voltage |
| | gl | i_n | Bias current for neuron leakage circuit |
| | τrefrac | i_n | Bias current controlling neuron refractory time |
| | El | s_n | Leakage reversal potential |
| | Einh | s_n | Inhibitory reversal potential |
| | Eexc | s_n | Excitatory reversal potential |
| | Vth | s_n | Firing threshold voltage |
| | Vreset | s_n | Reset potential |
| Synapse line drivers (B) | n/a | i_l | Two digital configuration bits selecting input of line driver |
| | n/a | i_l | Two digital configuration bits setting line excitatory or inhibitory |
| | trise, tfall | i_l | Two bias currents for rising and falling slew rate of presynaptic voltage ramp |
| | gi^max | i_l | Bias current controlling maximum voltage of presynaptic voltage ramp |
| Synapses (B) | w | i_s | 4-bit weight of each individual synapse |
| STP related (C) | n/a | i_l | Two digital configuration bits selecting short-term depression or facilitation |
| | USE | i_l | Two digital configuration bits tuning synaptic efficacy for STP |
| | n/a | s_l | Bias voltage controlling spike driver pulse length |
| | τrec, τfacil | s_l | Voltage controlling STP time constant |
| | I | s_l | Short-term facilitation reference voltage |
| | R | s_l | Short-term capacitor high potential |
| STDP related (D) | n/a | i_l | Bias current controlling delay for presynaptic correlation pulse (for calibration purposes) |
| | A± | s_l | Two voltages dimensioning charge accumulation per (anti-)causal correlation measurement |
| | n/a | s_l | Two threshold voltages for detection of relevant (anti-)causal correlation |
| | τSTDP | g | Voltage controlling STDP time constants |
For each hardware parameter the corresponding model parameter names are listed, excluding technical parameters that are only relevant for correctly biasing analog support circuitry or controlling digital chip functionality. Electronic parameters that have no direct translation to model parameters are denoted n/a. The membrane capacitance is fixed and identical for all neuron circuits (Cm = 0.2 nF in biological value domain). Parameter types: (i) controllable for each corresponding circuit: 192 for neuron circuits (denoted with subscript n), 256 for synapse line drivers (denoted with subscript l), 49,152 for synapses (denoted with subscript s), (s) two values, shared for all even/odd neuron circuits or synapse line drivers, respectively, (g) global, one value for all corresponding circuits on the chip. All numbers refer to circuits associated to one synapse array and are doubled for the whole chip. For technical reasons, the current revision of the chip only allows usage of one synapse array of the chip. Therefore, all experiments presented in this paper are limited to a maximum of 192 neurons. For parameters denoted by (A) see equation (1) and Schemmel et al. (2006), for (B), see Figure 1, equation (2), and Dayan and Abbott (2001), for (C) see Schemmel et al. (2007), and for (D) see Schemmel et al. (2006) and Pfeil et al. (2012).
Control software that abstracts away hardware details greatly simplifies modeling on the neuromorphic hardware system. However, modelers are already struggling with multiple incompatible interfaces to software simulators. That is why our neuromorphic hardware system supports PyNN, a widely used application programming interface (API) that strives for a coherent user interface, allowing portability of neural network models between different software simulation frameworks (e.g., NEST or NEURON) and hardware systems (e.g., the Spikey system). For details see Gewaltig and Diesmann (2007) and Eppler et al. (2009) for NEST, Carnevale and Hines (2006) and Hines et al. (2009) for NEURON, Brüderle et al. (2009, 2011) for the Spikey chip, and Davison et al. (2009, 2010) for PyNN.
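For illustration, the following sketch shows what such a PyNN script might look like when targeting the Spikey system. The back-end module name (pyNN.hardware.spikey), the hardware cell type (IF_facets_hardware1), and all parameter values are assumptions chosen for this example and may differ between software releases.

```python
# Minimal PyNN sketch for the Spikey back-end (module and parameter names assumed).
import pyNN.hardware.spikey as pynn

pynn.setup()  # initialize the chip and load calibration data

# A small population of hardware LIF neurons (biological parameter domain, mV/nS)
neurons = pynn.Population(10, pynn.IF_facets_hardware1,
                          {'v_rest': -65.0, 'v_thresh': -55.0,
                           'v_reset': -80.0, 'g_leak': 20.0})

# Poisson background stimulus, connected all-to-all; weights are discretized
# to the 4-bit resolution of the hardware synapses
stimulus = pynn.Population(10, pynn.SpikeSourcePoisson, {'rate': 20.0})
pynn.Projection(stimulus, neurons,
                pynn.AllToAllConnector(weights=0.003),  # weight in uS
                target='excitatory')

neurons.record()   # record spikes
pynn.run(1000.0)   # 1 s of biological time; about 0.1 ms on-chip at 10^4 acceleration
spikes = neurons.getSpikes()
pynn.end()
```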
2.3. Configurability
In order to facilitate the emulation of network models inspired by biological neural structures, it is essential to support the implementation of different (cortical) neuron types. From a mathematical perspective, this can be achieved by varying the appropriate parameters of the implemented neuron model [equation (1)].
To this end, the Spikey chip provides 2969 different analog parameters (Table 1) stored in current memory cells that are continuously refreshed from a digital on-chip memory. Most of these cells deliver individual parameters for each neuron (or synapse line driver), e.g., leakage conductances gl. Due to the size of the current-voltage conversion circuitry, it was not possible to provide individual voltage parameters, such as El, Eexc, and Einh, for each neuron. As a consequence, groups of 96 neurons share most of these voltage parameters. Parameters that cannot be controlled individually are delivered by global current memory cells.
In addition to the possibility of controlling analog parameters, the Spikey chip also offers an almost arbitrary configurability of the network topology. As illustrated in Figure 1, the fully configurable synapse array allows connections from synapse line drivers (located alongside the array) to arbitrary neurons (located below the array) via synapses whose weights can be set individually with a 4-bit resolution. This limits the maximum fan-in to 256 synapses per neuron, which can be composed of up to 192 synapses from on-chip neurons, and up to 256 synapses from external spike sources. Because the total number of neurons exceeds the number of inputs per neuron, an all-to-all connectivity is not possible. For all networks presented in this study, the connection density is much lower than realizable on the chip, which supports the chosen trade-off between inputs per neuron and total neuron count.
2.4. Calibration
Device mismatch arising from hardware production variability causes fixed-pattern noise: parameters vary from neuron to neuron as well as from synapse to synapse. Electronic noise (including thermal noise) also affects dynamic variables, such as the membrane potential Vm. Consequently, experiments exhibit some amount of both neuron-to-neuron and trial-to-trial variability given the same input stimulus. It is, however, important to note that these types of variations are not unlike the neuron diversity and response stochasticity found in biology (Gupta et al., 2000; Maass et al., 2002; Marder and Goaillard, 2006; Rolls and Deco, 2010).
To facilitate modeling and provide repeatability of experiments on arbitrary Spikey chips, it is essential to minimize these effects by calibration routines. Many calibrations have directly corresponding biological model parameters, e.g., membrane time constants (described in the following), firing thresholds, synaptic efficacies, or PSP shapes. Others have no equivalents, like compensations for shared parameters or workarounds of defects (e.g., Kaplan et al., 2009; Bill et al., 2010; Pfeil et al., 2012). In general, calibration results are used to improve the mapping between biological input parameters and the corresponding target hardware voltages and currents, as well as to determine the dynamic range of all model parameters (e.g., Brüderle et al., 2009).
While the calibration of most parameters is rather technical but straightforward (e.g., all neuron voltage parameters), some require more elaborate techniques. These include the calibration of τm and STP as well as of the synapse line drivers, which we describe later for individual network models. The membrane time constant τm = Cm/gl differs from neuron to neuron, mostly due to variations in the leakage conductance gl. However, gl is independently adjustable for every neuron. Because this conductance is not directly measurable, an indirect calibration method is employed. To this end, the threshold potential is set below the resting potential. Following each spike, the membrane potential is clamped to Vreset for an absolute refractory time τrefrac, after which it evolves exponentially toward the resting potential El until the threshold voltage triggers a spike and the next cycle begins. If the threshold voltage is set to Vth = El − 1/e·(El − Vreset), the spike frequency equals 1/(τm + τrefrac), thereby allowing an indirect measurement and calibration of gl and therefore τm. For a given τm and fixed τrefrac, Vth can be calculated. An iterative method is applied to find the best-matching Vth, because the exact hardware values for El, Vreset, and Vth are only known after the measurement. The effect of calibration can best be exemplified for a typical target value of τm = 10 ms: Figure 3 depicts the distribution of τm on a typical chip before and after calibration.
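A possible software-side sketch of this procedure is given below; set_neuron_config() and measure_rate() are hypothetical wrappers around the hardware interface, and all voltages are example values in the biological domain.

```python
import numpy as np

def calibrate_tau_m(neuron, target_tau_m=10.0, tau_refrac=1.0, n_iter=5):
    """Sketch of the indirect tau_m calibration (times in ms, rates in Hz).

    The neuron spikes continuously because the threshold lies below the resting
    potential; with V_th one e-fold below E_l, the firing period equals
    tau_m + tau_refrac, so tau_m can be read off the measured rate.
    """
    E_l, V_reset = -65.0, -80.0               # assumed example potentials (mV)
    g_l = 20.0                                # initial leak conductance guess (nS)
    for _ in range(n_iter):
        V_th = E_l - (E_l - V_reset) / np.e   # threshold one e-fold below E_l
        set_neuron_config(neuron, g_l=g_l, V_th=V_th,
                          V_reset=V_reset, E_l=E_l)   # hypothetical hardware call
        rate = measure_rate(neuron)                   # hypothetical hardware call
        tau_m = 1e3 / rate - tau_refrac       # firing period minus refractory time
        g_l *= tau_m / target_tau_m           # C_m is fixed, so tau_m ~ 1/g_l
    return g_l
```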
The STP hardware parameters have no direct translation to model equivalents. In fact, the implemented transconductance amplifier tends to easily saturate within the available hardware parameter ranges. These non-linear saturation effects can be hard to handle in an automated fashion on an individual circuit basis. Consequently, the translation of these parameters is based on STP courses averaged over several circuits.
3. Hardware Emulation of Neural Networks
In the following, we present six neural network models that have been emulated on the Spikey chip. Most of the emulation results are compared to those obtained by software simulations in order to verify the network functionality and performance. For all these simulations the tool NEST (Gewaltig and Diesmann, 2007) or NEURON (Carnevale and Hines, 2006) is used.
3.1. Synfire chain with feedforward inhibition
Architectures with a feedforward connectivity have been employed extensively as computational components and as models for the study of neuronal dynamics. Synfire chains are feedforward networks consisting of several neuron groups where each neuron in a group projects to neurons in the succeeding group.
They have been originally proposed to account for the presence of behaviorally related, highly precise firing patterns (Prut et al., 1998; Baker et al., 2001). Further properties of such structures have been studied extensively, including activity transport (Aertsen et al., 1996; Diesmann et al., 1999; Litvak et al., 2003), external control of information flow (Kremkow et al., 2010a), computational capabilities (Abeles et al., 2004; Vogels and Abbott, 2005; Schrader et al., 2010), complex dynamic behavior (Yazdanbakhsh et al., 2002), and their embedding into surrounding networks (Aviel et al., 2003; Tetzlaff et al., 2005; Schrader et al., 2008). Kremkow et al. (2010b) have shown that feedforward inhibition can increase the selectivity to the initial stimulus and that the local delay of inhibition can modify this selectivity.
3.1.1. Network topology
The presented network model is an adaptation of the feedforward network described in Kremkow et al. (2010b).
The network consists of several neuron groups, each comprising nRS = 100 excitatory regular spiking (RS) and nFS = 25 inhibitory fast spiking (FS) cells. All neurons are modeled as LIF neurons with exponentially decaying synaptic conductance courses. According to Kremkow et al. (2010b) all neurons have identical parameters.
As shown in Figure 4A, RS neurons project to both RS and FS populations in the subsequent group while the FS population projects to the RS population in its local group. Each neuron receives a fixed number of randomly chosen inputs from each presynaptic population. The first group is stimulated by a population of nRS external spike sources with identical connection probabilities as used for RS groups within the chain.
Two different criteria are employed to assess the functionality of the emulated synfire chain. The first, straightforward benchmark is the stability of signal propagation. An initial synchronous stimulus is expected to cause a stable propagation of activity, with each neuron in an RS population spiking exactly once. Deviations from the original network parameters can cause the activity to grow rapidly, i.e., each population emits more spikes than its predecessor, or stall pulse propagation.
The second, broader characterization follows Kremkow et al. (2010b), who analyzed the response of the network to various stimuli. The stimulus is parametrized by the variables a and σ: each neuron in the stimulus population emits a spikes whose times are drawn from a Gaussian distribution with common mean, and σ is defined as the standard deviation of the spike times of all source neurons. Spiking activity that is evoked in the subsequent RS populations is characterized analogously by measuring a and σ.
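A pulse packet of this kind can be generated in a few lines; the sketch below assumes spike times in milliseconds and draws, for every stimulus neuron, a spikes from a common Gaussian.

```python
import numpy as np

def pulse_packet(n_sources=100, a=5, sigma=1.0, t_mean=50.0, seed=None):
    """Stimulus sketch: each of n_sources neurons emits `a` spikes whose times
    are drawn from a Gaussian with common mean t_mean and width sigma (ms)."""
    rng = np.random.default_rng(seed)
    return [np.sort(rng.normal(t_mean, sigma, size=a)) for _ in range(n_sources)]

# Example: a strong, temporally tight packet (a = 5, sigma = 0.3 ms)
stimulus_spike_trains = pulse_packet(a=5, sigma=0.3)
```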
Figure 4C shows the result of a software simulation of the original network. The filter properties of the network are reflected by a separatrix dividing the state space shown in Figures 4C,D into two areas, each with a different fixed point. First, the basin of attraction (dominated by red circles in Figure 4C) from which stable propagation can be evoked and second, the remaining region (dominated by crosses in Figure 4C) where any initial activity becomes extinguished. This separatrix determines which types of initial input lead to a stable signal propagation.
3.1.2. Hardware emulation
The original network model could not be mapped directly to the Spikey chip because it requires 125 neurons per group, while on the chip only 192 neuron circuits are available. Further constraints were caused by the fixed synaptic delays, which are determined by the speed of signal propagation on the chip. The magnitude of the delay is approximately 1 ms in biological time.
By simple modifications of the network, we were able to qualitatively reproduce both benchmarks defined in Section 3.1.1. Two different network configurations were used, each adjusted to the requirements of one benchmark. In the following, we describe these differences, as well as the results for each benchmark.
To demonstrate a stable propagation of pulses, a large number of consecutive group activations was needed. The chain was configured as a loop by connecting the last group to the first, allowing the observation of more pulse packet propagations than there are groups in the network.
The time between two passes of the pulse packet at the same synfire group needs to be maximized to allow the neurons to recover (see voltage trace in Figure 4B). This is accomplished by increasing the group count and consequently reducing the group size. As too small populations cause an unreliable signal propagation, which is mainly caused by inhomogeneities in the neuron behavior, nRS = nFS = 8 was chosen as a satisfactory trade-off between propagation stability and group size. Likewise, the proportion of FS neurons in a group was increased to maintain a reliable inhibition. To further improve propagation properties, the membrane time constant was lowered for all neurons by raising gl to its maximum value. The strength of inhibition was increased by setting the inhibitory synaptic weight to its maximum value and lowering the inhibitory reversal potential to its minimum value. Finally, the synaptic weights RSi → RSi+1 and RSi → FSi+1 were adjusted. With these improvements we could observe persisting synfire propagation on the oscilloscope 2 h wall-clock time after stimulation. This corresponds to more than 2 years in biological real-time.
The second network demonstrates the filtering properties of a hardware-emulated synfire chain with feedforward inhibition. This use case required larger synfire groups than in the first case as otherwise, the total excitatory conductance caused by a pulse packet with large σ was usually not smooth enough due to the low number of spikes. Thus, three groups were placed on a single chip with nRS = 45 and nFS = 18. The resulting evolution of pulse packets is shown in Figure 4D. After passing three groups, most runs resulted in either very low activity in the last group or were located near the point (0.3 ms, 1).
Emulations on hardware differ from software simulations in two important points. First, the separation in the parameter space of the initial stimulus is not as sharply bounded, as demonstrated by the fact that occasionally, significant activity in the last group can be evoked by stimuli with large σ and large a (Figure 4D). This is a combined effect of the reduced population sizes and the fixed-pattern noise in the neuronal and synaptic circuits. Second, a stimulus with a small a can evoke weak activity in the last group, which is attributed to a differing balance between excitation and inhibition. In hardware, a weak stimulus causes both the RS and FS populations to respond weakly, which leads to a weak inhibition of the RS population, allowing the pulse to reach the last synfire group. Hence, the pulse fades slowly instead of being extinguished completely. In the original model, the FS population is more responsive and prevents the propagation more efficiently.
Nevertheless, the filtering properties of the network are apparent. The quality of the filter could be improved by employing the original group size, which would require using a large-scale neuromorphic device (see, e.g., Schemmel et al., 2010).
Our hardware implementation of the synfire chain model demonstrates the possibility to run extremely long lasting experiments due to the high acceleration factor of the hardware system. Because the synfire chain model itself does not require sustained external stimulus, it could be employed as an autonomous source of periodic input to other experiments.
3.2. Balanced random network
Brunel (2000) reports balanced random networks (BRNs) exhibiting, among others, asynchronous irregular network states with stationary global activity.
3.2.1. Network topology
BRNs consist of an inhibitory and an excitatory population of neurons, both receiving feedforward connections from two populations of Poisson processes mimicking background activity. Both neuron populations are recurrently connected, including connections within each population. All connections are realized as sparse random connections with probability p. In this study, synaptic weights for inhibitory connections are chosen four times larger than those for excitatory ones. In contrast to the original implementation using 12,500 neurons, we scaled this network down by a factor of 100 while preserving its firing behavior.
If single cells fire irregularly, the coefficient of variation
CV = σT / T̄    (3)
of interspike intervals has values close to or higher than one (Dayan and Abbott, 2001). T̄ and σT are the mean and standard deviation of these intervals. Synchrony between two cells can be measured by calculating the correlation coefficient
CC = cov(n1, n2) / √(var(n1) · var(n2))    (4)
of their spike trains n1 and n2, respectively (Perkel et al., 1967). The variance (var) and covariance (cov) are calculated by using time bins with 2 ms duration (Kumar et al., 2008).
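Both measures can be computed directly from the recorded spike trains; the following sketch evaluates equations (3) and (4), using 2 ms bins for the correlation coefficient.

```python
import numpy as np

def cv_isi(spike_times):
    """Coefficient of variation of interspike intervals, equation (3)."""
    isi = np.diff(np.sort(spike_times))
    return np.std(isi) / np.mean(isi)

def cc_pair(spikes1, spikes2, t_max, bin_size=2.0):
    """Correlation coefficient of two spike trains, equation (4), with 2 ms bins."""
    bins = np.arange(0.0, t_max + bin_size, bin_size)
    n1, _ = np.histogram(spikes1, bins)
    n2, _ = np.histogram(spikes2, bins)
    return np.corrcoef(n1, n2)[0, 1]   # cov(n1, n2) / sqrt(var(n1) * var(n2))
```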
Brüderle et al. (2010) have shown another approach to investigating networks inspired by Brunel (2000). Their focus was on the effects of network parameters and STP on the firing rate of the network. In our study, we show that such BRNs can exhibit an asynchronous irregular network state when emulated on hardware.
3.2.2. Hardware emulation
In addition to standard calibration routines (Section 2.4), we calibrated the chip explicitly for the BRN shown in Figure 5A. In the first of two steps, excitatory and inhibitory synapse line drivers were calibrated sequentially toward uniform strength, with inhibition four times stronger than excitation. To this end, all available neurons received spiking activity from a single synapse line driver, thereby averaging out neuron-to-neuron variations. The shape of synaptic conductances (specifically tfall and gi^max) was adjusted to obtain a target mean firing rate of 10 Hz over all neurons. Similarly, each driver was calibrated for its inhibitory operation mode: all neurons were strongly stimulated by an additional driver with its excitatory mode already calibrated, and again the shape of conductances, this time for inhibition, was adjusted to obtain the target rate.
Untouched by this prior calibration toward a target mean rate, neuron excitability still varied between neurons and was calibrated consecutively for each neuron in a second calibration step. For this, all neurons of the BRN were used to stimulate a single neuron with a total firing rate that was uniformly distributed among all inputs and equal to the estimated firing rate of the final network implementation. Subsequently, all afferent synaptic weights to this neuron were scaled in order to adapt its firing rate to the target rate.
To avoid a self-reinforcement of network activity observed in emulations on the hardware, efferent connections of the excitatory neuron population were modeled as short-term depressing. Nevertheless, such BRNs still show an asynchronous irregular network state (Figure 5B).
Figure 5C shows recordings of a BRN emulation on a calibrated chip with neurons firing irregularly and asynchronously. Note that CV ≥ 1 does not necessarily guarantee an exponential interspike interval distribution, much less Poisson firing. However, neurons within the BRN clearly exhibit irregular firing (compare raster plots of Figures 5B,C).
A simulation of the same network topology and stimulus using software tools produced similar results. Synaptic weights were not known for the hardware emulation, but were defined by the target firing rates using the above calibration. A translation to biological parameters is possible, but would have required further measurements and was not of further interest in this context. Instead, for software simulations, the synaptic weight for excitatory connections was chosen to fit the mean firing rate of the hardware emulation (approximately 9 Hz). The weight of inhibitory connections was then chosen to preserve the ratio between inhibitory and excitatory weights.
Membrane dynamics of single neurons within the network are comparable between hardware emulations and software simulations (Figures 5B,C). Evidently, spike times differ between the two approaches due to various hardware noise sources (Section 2.4). However, in “large” populations of neurons (Ne + Ni = 125 neurons), these phenomena have qualitatively no effect on the firing statistics, which remain comparable to those of software simulations (compare raster plots of Figures 5B,C). The ability to reproduce these statistics is highly relevant in the context of cortical models which rely on asynchronous irregular firing activity for information processing (e.g., van Vreeswijk and Sompolinsky, 1996).
3.3. Soft winner-take-all network
Soft winner-take-all (sWTA) computation is often viewed as an underlying principle in models of cortical processing (Grossberg, 1973; Maass, 2000; Itti and Koch, 2001; Douglas and Martin, 2004; Oster et al., 2009; Lundqvist et al., 2010). The sWTA architecture has many practical applications, for example contrast enhancement or deciding which of two concurrent inputs is larger. Many neuromorphic systems explicitly implement sWTA architectures (Lazzaro et al., 1988; Chicca et al., 2007; Neftci et al., 2011).
3.3.1. Network topology
We implemented an sWTA network composed of a ring-shaped layer of recurrently connected excitatory neurons and a common pool of inhibitory neurons (Figure 6A), following the implementation by Neftci et al. (2011). Excitatory neurons project to the common inhibitory pool and receive recurrent feedback from there. In addition, excitatory neurons have recurrent excitatory connections to their neighbors on the ring. The strength of these connections decays with increasing distance on the ring, following a Gaussian profile with a standard deviation of σrec = 5 neurons. External stimulation is also delivered through a Gaussian profile, with the mean μext denoting the neuron index that receives input with maximum synaptic strength. Synaptic input weights to neighbors of that neuron decay according to a standard deviation of σext = 3 neurons; input weights beyond 3·σext were clipped to zero. Each neuron located within this Gaussian profile receives stimulation from five independent Poisson spike sources, each firing at rate r. Depending on the contrast between the input firing rates r1 and r2 of two stimuli applied to opposing sides of the ring, one side of the ring “wins” by firing at a higher rate and thereby suppressing the other.
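The two Gaussian weight profiles can be written down compactly; the sketch below uses ring (wrap-around) distances and an arbitrary example maximum weight.

```python
import numpy as np

def ring_distance(i, j, n):
    """Distance between neuron indices i and j on a ring of n neurons."""
    d = np.abs(i - j)
    return np.minimum(d, n - d)

def recurrent_weights(n_exc=50, sigma_rec=5.0, w_max=0.01):
    """Recurrent excitatory weights on the ring: Gaussian decay with distance."""
    idx = np.arange(n_exc)
    d = ring_distance(idx[:, None], idx[None, :], n_exc)
    return w_max * np.exp(-d**2 / (2.0 * sigma_rec**2))

def stimulus_weights(n_exc=50, mu_ext=10, sigma_ext=3.0, w_max=0.01):
    """External input weights, clipped to zero beyond 3 * sigma_ext."""
    d = ring_distance(np.arange(n_exc), mu_ext, n_exc)
    w = w_max * np.exp(-d**2 / (2.0 * sigma_ext**2))
    w[d > 3 * sigma_ext] = 0.0
    return w
```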
3.3.2. Hardware emulation
We assessed the efficiency of this sWTA circuit by measuring the reduction in firing rate of neurons on one side of the ring when the opposite side is stimulated. We stimulated one side of the ring with a constant firing rate, and the opposite side with a varying one. For hardware emulations, each stimulus was distributed, and hence averaged, over multiple line drivers in order to equalize stimulation strength among neurons. For both back-ends, inhibitory weights were chosen four times stronger than excitatory ones (using the synapse line driver calibration of Section 3.2).
The firing rate of the reference side decreased when the firing rate of stimulation to the opposite side was increased, both in software simulation and on the hardware (Figures 6B,C). In both cases, the average firing rates crossed at approximately r2 = 50 Hz, corresponding to the spike rate delivered to the reference side. The firing rates rtot are less distinctive for hardware emulations compared to software simulations, but still sufficient to produce robust sWTA functionality. Note that the observed firing rates are higher on the hardware than in the software simulation. This difference is due to the fact that the reliability of the network performance improved for higher firing rates.
Figures 6D,E depict activity profiles of the excitatory neuron layer. The hardware neurons exhibited a broader and also slightly asymmetric excitation profile compared to the software simulation. The asymmetry is likely due to inhomogeneous excitability of neurons, which is caused by fixed-pattern noise (Section 2). The broader excitation profile indicates that inhibition is less efficient on the hardware than in the software simulation (a trend that can also be observed in the firing rates in Figures 6B,C). Counteracting this loss of inhibition may be possible through additional calibration, if the sharpness of the excitation profile is critical for the task in which such an sWTA circuit is to be employed.
The network emulated on Spikey indeed performs sWTA, because the side of the ring with stronger stimulation shows an amplified firing rate, while the firing rate of the other side is suppressed (see Figure 6F). This qualifies our hardware system for applications relying on similar sWTA network topologies.
3.4. Cortical layer 2/3 attractor model
Throughout the past decades, attractor networks that model working memory in the cerebral cortex have gained increasing support from both experimental data and computer simulations. The cortical layer 2/3 attractor memory model described in Lundqvist et al. (2006, 2010) has been remarkably successful at reproducing both low-level (firing patterns, membrane potential dynamics) and high level (pattern completion, attentional blink) features of cortical information processing. One particularly valuable aspect is the very low amount of fine-tuning this model requires in order to reproduce the rich set of desired internal dynamics. It has also been shown in Brüderle et al. (2011) that there are multiple ways of scaling this model down in size without affecting its main functionality features. These aspects make it an ideal candidate for implementation on our analog neuromorphic device. In this context, it becomes particularly interesting to analyze how the strong feedback loops which predominantly determine the characteristic network activity are affected by the imposed limitations of the neuromorphic substrate and fixed-pattern noise. Here, we extend the work done in Brüderle et al. (2011) by investigating specific attractor properties such as firing rates, voltage UP-states, and the pattern completion capability of the network.
3.4.1. Network topology
From a structural perspective, the most prominent feature of the Layer 2/3 Attractor Memory Network is its modularity. Faithful to its biological archetype, it implements a set of cortical hypercolumns, which are in turn subdivided into multiple minicolumns (Figure 7A). Each minicolumn consists of three cell populations: excitatory pyramidal cells, inhibitory basket cells, and inhibitory RSNP (regular spiking non-pyramidal) cells.
Attractor dynamics arise from the synaptic connectivity on two levels. Within a hypercolumn, the basket cell population enables a soft-WTA-like competition among the pyramidal populations within the minicolumns. On a global scale, the long-range inhibition mediated by the RSNP cells governs the competition among so-called patterns, as explained in the following.
In the original model described in Lundqvist et al. (2010), each hypercolumn contains 9 minicolumns, each of which consists of 30 pyramidal cells, 2 RSNP cells, and 1 basket cell. Within a minicolumn, the pyramidal cells are interconnected and also project onto the 8 closest basket cells within the same hypercolumn. In turn, pyramidal cells in a minicolumn receive projections from all basket cells within the same hypercolumn. All pyramidal cells receive two types of additional excitatory input: an evenly distributed amount of diffuse Poisson noise and specific activation from cortical layer 4. Therefore, the minicolumns (i.e., the pyramidal populations within) compete among each other in WTA-like fashion, with the winner being determined by the overall strength of the received input.
A pattern (or attractor) is defined as containing exactly one minicolumn from each hypercolumn. Considering only orthogonal patterns (each minicolumn may only belong to a single pattern) and given that all hypercolumns contain an equal amount of minicolumns, the number of patterns in the network is equal to the number of minicolumns per hypercolumn. Pyramidal cells within each minicolumn project onto the pyramidal cells of all the other minicolumns in the same pattern. These connections ensure a spread of local activity throughout the entire pattern. Additionally, the pyramidal cells also project onto the RSNP cells of all minicolumns belonging to different attractors, which in turn inhibit the pyramidal cells within their minicolumn. This long-range competition enables the winning pattern to completely shut down the activity of all other patterns.
Two additional mechanisms weaken active patterns, thereby facilitating switches between patterns. The pyramidal cells contain an adaptation mechanism which decreases their excitability with every emitted spike. Additionally, the synapses between pyramidal cells are modeled as short-term depressing.
3.4.2. Hardware emulation
When scaling down the original model (2673 neurons) to the maximum size available on the Spikey chip (192 neurons, see Figure 7B for software simulation results), we made use of the essential observation that the number of pyramidal cells can simply be reduced without compensating for it by increasing the corresponding projection probabilities. Also, for fewer than 8 minicolumns per hypercolumn, all basket cells within a hypercolumn have identical afferent and efferent connectivity patterns, allowing them to be treated as a single population. Their total number was decreased, while their efferent projection probabilities were increased accordingly. In general (i.e., except for pyramidal cells), when the number and/or size of populations were changed, projection probabilities were scaled in such a way that the total fan-in for each neuron was kept at a constant average. When the maximum fan-in was reached (one afferent synapse for every neuron in the receptive field), the corresponding synaptic weights were scaled up by the remaining factor.
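This scaling rule can be summarized in a few lines; the sketch below keeps the average fan-in p · Npre constant and, once full connectivity is reached, moves the remaining factor onto the synaptic weight (all numbers are illustrative).

```python
def scale_projection(p_orig, n_pre_orig, n_pre_scaled, w_orig):
    """Keep the average fan-in constant when the presynaptic population shrinks;
    if the required connection probability exceeds 1, transfer the remaining
    factor onto the synaptic weight instead."""
    p_scaled = p_orig * n_pre_orig / n_pre_scaled
    w_scaled = w_orig
    if p_scaled > 1.0:
        w_scaled = w_orig * p_scaled   # remaining factor goes into the weight
        p_scaled = 1.0
    return p_scaled, w_scaled

# Illustrative example: 30 -> 6 presynaptic pyramidal cells, original p = 0.25
p_new, w_new = scale_projection(0.25, 30, 6, w_orig=0.002)   # p_new = 1.0, w_new = 0.0025
```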
Because neuron and synapse models on the Spikey chip are different to the ones used in the original model, we have performed a heuristic fit in order to approximately reproduce the target firing patterns. Neuron and synapse parameters were first fitted in such a way as to generate clearly discernible attractors with relatively high average firing rates (see Figure 7D). Additional tuning was needed to compensate for missing neuronal adaptation, limitations in hardware configurability, parameter ranges, and fixed-pattern noise affecting hardware parameters.
During hardware emulations, apart from the appearance of spontaneous attractors given only diffuse Poisson stimulation of the network (Figure 7C), we were able to observe two further interesting phenomena which are characteristic for the original attractor model.
When an attractor becomes active, its pyramidal cells enter a so-called UP state which is characterized by an elevated average membrane potential. Figure 7E clearly shows the emergence of such UP-states on hardware. The onset of an attractor is characterized by a steep rise in pyramidal cell average membrane voltage, which then decays toward the end of the attractor due to synaptic short-term depression and/or competition from other attractors temporarily receiving stronger stimulation. On both flanks of an UP state, the average membrane voltage shows a slight undershoot, due to the inhibition by other active attractors.
A second important characteristic of cortical attractor models is their capability of performing pattern completion (Lundqvist et al., 2006). This means that a full pattern can be activated by stimulating only a subset of its constituent pyramidal cells (in the original model, by cells from cortical layer 4, modeled by us as additional Poisson sources). The appearance of this phenomenon is similar to a phase transition from a resting state to a collective pyramidal UP state, occurring when a critical number of pyramidal cells is stimulated. To demonstrate pattern completion, we used the same setup as in the previous experiments, except that one pattern received additional stimulation. Starting from an initial equilibrium between the three attractors (approximately equal active time), we observed the expected sharp transition to a state where the stimulated attractor dominates the other two, occurring when one of its four minicolumns received L4 stimulus (Figure 7F).
The implementation of the attractor memory model is a particularly comprehensive showcase of the configurability and functionality of our neuromorphic platform due to the complexity of both model specifications and emergent dynamics. Starting from these results, the next generation hardware (Schemmel et al., 2010) will be able to much more accurately model biological behavior, thanks to a more flexible, adapting neuron model and a significantly increased network size.
3.5. Insect antennal lobe model
The high acceleration factor of the Spikey chip makes it an attractive platform for neuromorphic data processing. Preprocessing of multivariate data is a common problem in signal and data analysis. In conventional computing, reduction of correlation between input channels is often the first step in the analysis of multidimensional data, achieved, e.g., by principal component analysis (PCA). The architecture of the olfactory system maps particularly well onto this problem (Schmuker and Schneider, 2007). We have implemented a network that is inspired by processing principles that have been described in the insect antennal lobe (AL), the first relay station from olfactory sensory neurons to higher brain areas. The function of the AL has been described to decorrelate the inputs from sensory neurons, potentially enabling more efficient memory formation and retrieval (Linster and Smith, 1997; Stopfer et al., 1997; Perez-Orive et al., 2004; Wilson and Laurent, 2005; Schmuker et al., 2011b). The mammalian analog of the AL (the olfactory bulb) has been the target of a recent neuromorphic modeling study (Imam et al., 2012b).
The availability of a network building block that achieves channel decorrelation is an important step toward high-performance neurocomputing. The aim of this experiment is to demonstrate that the previously studied rate-based AL model (Schmuker and Schneider, 2007) that reduces rate correlation between input channels is applicable to a spiking neuromorphic hardware system.
3.5.1. Network topology
In the insect olfactory system, odors are first encoded into neuronal signals by receptor neurons (RNs) which are located on the antenna. RNs send their axons to the AL (Figure 8A). The AL is composed of glomeruli, spherical compartments where RNs project onto local inhibitory neurons (LNs) and projection neurons (PNs). LNs project onto other glomeruli, effecting lateral inhibition. PNs relay the information to higher brain areas where multimodal integration and memory formation takes place.
The architecture of our model reflects the neuronal connectivity in the insect AL (Figure 8A). RNs are modeled as spike train generators, which project onto the PNs in the corresponding glomerulus. The PNs project onto the LNs, which send inhibitory projections to the PNs in other glomeruli.
In biology, the AL network reduces the rate correlation between glomeruli in order to improve stimulus separability and thus odor identification. Another effect of decorrelation is that the rate patterns encoding the stimuli become sparser and use the available coding space more efficiently, as redundancy is reduced. Our goal was to demonstrate the reduction of rate correlations across glomeruli (channel correlation) by the AL-inspired spiking network. To this end, we generated patterns of firing rates with channel correlation. We created a surrogate data set exhibiting channel correlation using a copula, a technique that allows correlated series of samples to be generated from an arbitrary random distribution and a covariance matrix (Nelsen, 1998). The covariance matrix was uniformly set to a target correlation of 0.6. Using this copula, we sampled 100 ten-dimensional data vectors from an exponential distribution. In the biological context, this is equivalent to having a repertoire of 100 odors, each encoded by ten receptors, with the firing rate of each input channel following a decaying exponential distribution. Values larger than e were clipped and the distribution was mapped to the interval [0, 1] by applying v → v/e to each value v. These values were then converted into firing rates between 20 and 55 spikes/s. Each ten-dimensional data vector was presented to the network by mapping its ten firing rates onto the ten glomeruli, setting all individual RNs in each glomerulus to fire at the respective target rate. Rates were converted to spike trains individually for each RN using a gamma process with γ = 5. Each data vector was presented to the network for a duration of 1 s by making the RNs of each glomerulus fire at the specified rate. The inhibitory weights between glomeruli were uniform, i.e., all inhibitory connections shared the same weight. During the 1 s of stimulus presentation, output rates were measured from the PNs. One output rate per glomerulus was obtained by averaging the firing rates of all PNs in that glomerulus.
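A Gaussian copula is one straightforward way to generate such a data set; the sketch below follows the procedure under that assumption: correlated normal samples are mapped through the normal CDF to uniforms, then through the inverse exponential CDF, clipped at e, normalized, and converted to rates.

```python
import numpy as np
from scipy.stats import norm, expon

def correlated_rates(n_samples=100, n_channels=10, rho=0.6,
                     r_min=20.0, r_max=55.0, seed=None):
    """Surrogate stimulus sketch: channel-correlated, exponentially distributed
    values mapped to firing rates between r_min and r_max (spikes/s)."""
    rng = np.random.default_rng(seed)
    cov = np.full((n_channels, n_channels), rho)
    np.fill_diagonal(cov, 1.0)
    z = rng.multivariate_normal(np.zeros(n_channels), cov, size=n_samples)
    u = norm.cdf(z)                       # correlated uniform samples (the copula)
    v = expon.ppf(u)                      # exponentially distributed values
    v = np.clip(v, 0.0, np.e) / np.e      # clip at e and map to [0, 1]
    return r_min + v * (r_max - r_min)    # firing rates in [20, 55] spikes/s
```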
We have used 6 RN input streams per glomerulus, projecting in an all-to-all fashion onto 7 PNs, which in turn projected on 3 LNs per glomerulus.
3.5.2. Hardware emulation
The purpose of the presented network was to reduce rate correlation between input channels. As in other models, fixed-pattern noise across neurons had a detrimental effect on the function of the network. We exploited the specific structure of our network to implement a more efficient calibration than the standard calibration methods provide (Section 2.4). Our calibration algorithm targeted PNs and LNs in the first layer of the network. During calibration, we turned off all projections between glomeruli. The aim was to achieve a homogeneous response across PNs and LNs, respectively, i.e., within ±10% of a target rate. The target rate was chosen as the median response rate of the uncalibrated neurons. For neurons whose response rate was too high, it was sufficient to reduce the synaptic weight of the excitatory input from RNs. For those neurons whose rate was too low, the input strength had to be increased. The excitatory synaptic weight of the input from RNs was initially already at its maximum value and could not be increased. As a workaround, we used PNs from the same glomerulus to provide additional excitatory input to those “weak” neurons. We ensured that no recurrent excitatory loops were introduced by this procedure. If all neurons in a glomerulus were too weak, we recruited another external input stream to achieve the desired target rate. Once the PNs were successfully calibrated (less than 10% deviation from the target rate), we used the same approach to calibrate the LNs in each glomerulus.
To assess the performance of the network we have compared the channel correlation in the input and in the output. The channel correlation matrix C was computed according to
Cij = dPearson(νglom.i, νglom.j)    (5)
with dPearson(•, •) the Pearson correlation coefficient between two vectors. For the input correlation matrix Cinput, the vector νglom.i contained the average firing rates of the six RNs projecting to the ith glomerulus, with one element per stimulus presentation. For the output correlation matrix Coutput, we used the rates of the PNs instead of the RNs. Thus, we obtained 10 × 10 matrices containing the rate correlations for each pair of input or output channels.
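Given the per-glomerulus rate vectors, both matrices reduce to Pearson correlations between columns; a minimal sketch:

```python
import numpy as np

def channel_correlation(rates):
    """Equation (5): Pearson correlation between channels (glomeruli).
    `rates` has shape (n_stimuli, n_glomeruli); column i holds the average
    rate of glomerulus i, one entry per stimulus presentation."""
    return np.corrcoef(rates, rowvar=False)   # 10 x 10 matrix for ten glomeruli

# C_input from the RN rates, C_output from the PN rates (same array shapes):
# C_in = channel_correlation(rn_rates); C_out = channel_correlation(pn_rates)
```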
Figure 8B depicts the correlation matrix Cinput for the input firing rates. When no lateral inhibition is present, Cinput matches Coutput (Figure 8C). We systematically varied the strength of lateral inhibition by scaling all inhibitory weights by a factor q, with q = 0 for zero lateral inhibition and q = 1 for inhibition at its maximal strength. With increasing lateral inhibition, off-diagonal values in Coutput approach zero, and output channel correlation virtually disappears (Figure 8D). The amount of residual correlation in the output can be controlled by adjusting the strength of lateral inhibition (Figure 8E).
Taken together, we demonstrated the implementation of an olfaction-inspired network to remove correlation between input channels on the Spikey chip. This network can serve as a preprocessing module for data analysis applications to be implemented on the Spikey chip. An interesting candidate for such an application is a spiking network for supervised classification, which may benefit strongly from reduced channel correlations for faster learning and better discrimination (Häusler et al., 2011).
3.6. Liquid state machine
Liquid state machines (LSMs) as proposed by Maass et al. (2002) and Jaeger (2001) provide a generic framework for computation on continuous input streams. The liquid, a recurrent network, projects an input into a high-dimensional space which is subsequently read out. It has been proven that LSMs have universal computational power for computations with fading memory on functions of time (Maass et al., 2002). In the following, we show that classification performance of an LSM emulated on our hardware is comparable to the corresponding computer simulation. Synaptic weights of the readout are iteratively learned on-chip, which inherently compensates for fixed-pattern noise. A trained system can then be used as an autonomous and very fast spiking classifier.
3.6.1. Network topology
The LSM consists of two major components: the recurrent liquid network itself and a spike-based classifier (Figure 9A). A general purpose liquid needs to meet the separation property (Maass et al., 2002), which requires that different inputs are mapped to different outputs, for a wide range of possible inputs. Therefore, we use a network topology similar to the one proposed by Bill et al. (2010). It consists of an excitatory and inhibitory population with a ratio of 80:20 excitatory to inhibitory neurons. Both populations have recurrent as well as feedforward connections. Each neuron in the liquid receives 4 inputs from the 32 excitatory and 32 inhibitory sources, respectively. All other connection probabilities are illustrated in Figure 9.
The readout is realized by means of a tempotron (Gütig and Sompolinsky, 2006), which is compatible with our hardware due to its spike-based nature. Furthermore, its modest single neuron implementation leaves most hardware resources to the liquid. The afferent synaptic weights are trained with the method described in Gütig and Sompolinsky (2006), which effectively implements gradient descent dynamics. Upon training, the tempotron distinguishes between two input classes by emitting either one or no spike within a certain time window. The former is artificially enforced by blocking all further incoming spikes after the first spike occurrence.
The PSP kernel of a LIF neuron with current-based synapses is given by
K(t − ti) = A · (exp(−(t − ti)/τm) − exp(−(t − ti)/τs)) · Θ(t − ti)    (6)
with the membrane time constant τm and the synaptic time constant τs, respectively. Here, A denotes a constant PSP scaling factor, ti the time of the ith incoming spike and Θ(t) the Heaviside step function.
During learning, weights are updated as follows
Δwj(n) = α(n) · Σ_{ti < tmax} K(tmax − ti)    (7)
where Δwj(n) is the weight update corresponding to the jth afferent neuron after the nth learning iteration with learning rate α(n), and the sum runs over the spike times ti of that afferent. The spike time of the tempotron, or otherwise the time of highest membrane potential, is denoted by tmax. In other words, for trials where an erroneous spike was elicited, the excitatory afferents with a causal contribution to this spike are weakened and inhibitory ones are strengthened according to equation (7). In case the tempotron did not spike even though it should have, the weights are modulated the other way round, i.e., excitatory weights are strengthened and inhibitory ones are weakened. This learning rule has been implemented on hardware with small modifications, due to the conductance-based nature of the hardware synapses (see below).
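A compact software sketch of equations (6) and (7) for the idealized current-based tempotron (before the hardware-specific modifications) could look as follows; weights are signed, and the learning rate and time constants are example values.

```python
import numpy as np

def psp_kernel(t, tau_m=10.0, tau_s=5.0, A=1.0):
    """Equation (6): difference-of-exponentials PSP kernel (t in ms, zero for t < 0)."""
    return A * (np.exp(-t / tau_m) - np.exp(-t / tau_s)) * (t >= 0)

def tempotron_update(weights, afferent_spikes, t_max, should_fire, did_fire, alpha=1e-3):
    """Equation (7): weight update after one trial.

    afferent_spikes[j] holds the spike times of afferent j; t_max is the time of
    the erroneous output spike (or of the maximal membrane potential if no spike
    was emitted). Weights are only changed on misclassified trials."""
    if should_fire == did_fire:
        return weights                      # correct trial: no update
    sign = 1.0 if should_fire else -1.0     # missed spike: potentiate; false spike: depress
    for j, spikes in enumerate(afferent_spikes):
        contrib = sum(psp_kernel(t_max - t) for t in spikes if t < t_max)
        weights[j] += sign * alpha * contrib
    return weights
```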
The tempotron is a binary classifier, hence any task needs to be mapped to a set of binary decisions. Here, we have chosen a simple binary task adapted from Maass et al. (2002) to evaluate the performance of the LSM. The challenge was to distinguish spike train segments in a continuous data stream composed of two templates with identical rates (denoted X and Y in Figure 9A). In order to generate the input, we cut the template spike trains into segments of 50 ms duration. We then composed the spike sequence to be presented to the network by randomly picking a spike segment from either X or Y in each time window (see Figure 9 for a schematic). Additionally, we added spike timing jitter drawn from a normal distribution with a standard deviation of σ = 1 ms to each spike. For each experiment run, both for training and evaluation, the composed spike sequence was then streamed into the liquid. Tempotrons were given the liquid activity as input and trained to identify whether the segment within the previous time window originated from sequence X or Y. In a second attempt, we trained the tempotron to identify the origin of the pattern presented in the window at −100 to −150 ms (that is, the second-to-last window). Not only did this task allow us to determine the classification capabilities of the LSM, but it also put the liquid’s fading memory to the test, as classification of a segment further back in time becomes increasingly difficult.
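The input stream can be assembled as sketched below: for every 50 ms window, one of the two template trains is chosen at random, its spikes in that window are copied, and 1 ms Gaussian jitter is added (template spike trains are assumed to be given as arrays of spike times in ms).

```python
import numpy as np

def compose_stream(template_x, template_y, n_segments=20,
                   seg_len=50.0, jitter=1.0, seed=None):
    """Compose the LSM input: per 50 ms window, pick the corresponding segment
    of template X or Y at random and add Gaussian spike-time jitter (ms)."""
    rng = np.random.default_rng(seed)
    stream, labels = [], []
    for k in range(n_segments):
        label = int(rng.integers(2))                   # 0 -> X, 1 -> Y
        template = template_y if label else template_x
        t0, t1 = k * seg_len, (k + 1) * seg_len
        segment = template[(template >= t0) & (template < t1)]
        stream.append(segment + rng.normal(0.0, jitter, size=segment.size))
        labels.append(label)
    return np.sort(np.concatenate(stream)), labels
```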
3.6.2. Hardware emulation
The liquid itself does not impose any strong requirements on the hardware since virtually any network is suitable as long as the separation property is satisfied. We adapted a network from Bill et al. (2010) which, in a similar form, had already been implemented on our hardware. However, STP was disabled, because at the time of the experiment it was not possible to exclusively enable STP for the liquid without severely affecting the performance of the tempotron.
The hardware implementation of the tempotron required more attention, since only conductance-based synapses are available. The dependence of spike efficacies on the actual membrane potential was neglected, because the rest potential was chosen to be close to the firing threshold, with the reversal potentials far away. However, the asymmetric distance of the excitatory and inhibitory reversal potentials from the sub-threshold regime needed compensation. This was achieved by scaling all excitatory weights by (V̄m − Einh)/(Eexc − V̄m), where V̄m corresponds to the mean neuron membrane voltage and Eexc and Einh denote the excitatory and inhibitory reversal potentials, respectively. Discontinuities in spike efficacies for synapses changing from excitatory to inhibitory or vice versa were avoided by prohibiting such transitions. Finally, membrane potential shunting after the first spike occurrence is neither possible on our hardware nor biologically plausible and was therefore neglected, as already proposed by Gütig and Sompolinsky (2006).
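A minimal sketch of this compensation is given below, assuming illustrative reversal potentials and mean membrane voltage; the actual hardware values differ.

```python
def scale_excitatory_weights(w_exc, v_mean=-55.0, e_exc=0.0, e_inh=-80.0):
    """Scale excitatory weights by the ratio of inhibitory to excitatory driving
    force around the mean membrane voltage, so that equal nominal weights yield
    comparable excitatory and inhibitory efficacies (values are illustrative)."""
    factor = (v_mean - e_inh) / (e_exc - v_mean)   # (V_m_bar - E_inh) / (E_exc - V_m_bar)
    return [w * factor for w in w_exc]
```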
Even though the tempotron was robust against fixed-pattern noise due to on-chip learning, the liquid required modifications. Therefore, firing thresholds were tuned independently in software and hardware to optimize the memory capacity and avoid violations of the separation property. Since hardware neurons share firing thresholds, the tempotron was affected accordingly (see Table 1). Additionally, the learning curve α(n) was chosen individually for software and hardware due to the limited resolution of synaptic weights on the latter.
The results for software and hardware implementations are illustrated in Figure 9B. Both LSMs performed at around 90% classification correctness for the spike train segment that lay 50–100 ms in the past with respect to the end of the stimulus. For inputs lying even further back in time, performance dropped to chance level (50% for a binary task), independent of the simulation back-end.
Regarding the classification capabilities of the LSM, our current implementation allows a large variety of tasks to be performed. For example, we are currently working on hand-written digit recognition with the very same setup on the Spikey chip. Even without a liquid, our implementation of the tempotron (or populations thereof) makes an excellent neuromorphic classifier, given its bandwidth-friendly sparse response and its robustness against fixed-pattern noise.
4. Discussion
We have successfully implemented a variety of neural microcircuits on a single universal neuromorphic substrate, which is described in detail by Schemmel et al. (2006). All networks show activity patterns qualitatively, and to some extent also quantitatively, similar to those obtained by software simulations. The corresponding reference models found in the literature have not been modified significantly, and network topologies have been identical for hardware emulation and software simulation unless stated otherwise. In particular, the emulations benefit from the advantages of our neuromorphic implementation, namely inherent parallelism and accelerated operation compared to software simulations on conventional von-Neumann machines. Previous accounts of networks implemented on the Spikey system include computing with high-conductance states (Kaplan et al., 2009), self-stabilizing recurrent networks (Bill et al., 2010), and simple emulations of cortical layer 2/3 attractor networks (Brüderle et al., 2011).
In this contribution, we have presented a number of new networks and extensions of previous implementations. Our synfire chain implementation achieves reliable signal propagation over years of biological time from one single stimulation, while synchronizing and filtering these signals (Section 3.1). Our extension of the network from Bill et al. (2010) to exhibit asynchronous irregular firing behavior is an important achievement in the context of reproducing stochastic activity patterns found in cortex (Section 3.2). We have realized soft winner-take-all networks on our hardware system (Section 3.3), which are essential building blocks for many cortical models involving some kind of attractor states [e.g., the decision-making model by Soltani and Wang (2010)]. The emulated cortical attractor model provides an implementation of working memory for computation with cortical columns (Section 3.4). Additionally, we have used the Spikey system for preprocessing of multivariate data inspired by biological archetypes (Section 3.5) and machine learning (Section 3.6). Most of these networks allocate the full number of neurons receiving input from one synapse array on the Spikey chip, but with different sets of neuron and synapse parameters and especially vastly different connectivity patterns, thereby emphasizing the remarkable configurability of our neuromorphic substrate.
However, the translation of such models requires modifications to allow execution on our hardware. The most prominent cause for such modifications is fixed-pattern noise across analog hardware neurons and synapses. In most cases, especially when population rate coding is involved, it is sufficient to compensate for this variability by averaging spiking activity over many neurons. For the data decorrelation and machine learning models, we have additionally trained the synaptic weights on the chip to achieve finer equilibration of the variability at critical network nodes. Especially when massive downscaling is required in order for models to fit onto the substrate, fixed-pattern noise presents an additional challenge because the same amount of information needs to be encoded by fewer units. For this reason, the implementation of the cortical attractor memory network required additional heuristic activity fitting procedures.
The usability of the Spikey system, especially for neuroscientists with no neuromorphic engineering background, is provided by an integrated development environment. We envision that the configurability made accessible by such a software environment will encourage a broader neuroscience community to use our hardware system. Examples of use are the acceleration of simulations as well as the investigation of the robustness of network models against parameter variability, both between computational units and between trials, as published, e.g., by Brüderle et al. (2010) and Schmuker et al. (2011a). The hardware system can be used efficiently without knowledge of the hardware implementation at the transistor level. Nevertheless, users have to consider basic hardware constraints, such as shared parameters. Networks can be developed using the PyNN metalanguage and optionally be prototyped on software simulators before running on the Spikey system (Brüderle et al., 2009; Davison et al., 2009). This rather easy configuration and operation of the Spikey chip allows the implementation of many other neural network models.
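To illustrate this workflow, a minimal PyNN script of the kind described above is sketched below; it is prototyped on a software simulator, and the commented-out hardware import, the neuron and stimulus counts, and all parameter values are assumptions for illustration rather than settings used in the experiments of this paper.

```python
import pyNN.nest as pynn                     # software simulator for prototyping
# import pyNN.hardware.spikey as pynn        # assumed name of the Spikey back-end

pynn.setup()

stim = pynn.Population(10, pynn.SpikeSourcePoisson, {'rate': 10.0})   # placeholder stimulus
neurons = pynn.Population(192, pynn.IF_cond_exp)                      # placeholder network
neurons.record()                                                      # record output spikes

pynn.Projection(stim, neurons, pynn.AllToAllConnector(weights=0.005),
                target='excitatory')

pynn.run(1000.0)              # biological milliseconds; the chip emulates this highly accelerated
spikes = neurons.getSpikes()  # (neuron id, spike time) pairs
pynn.end()
```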
There are also limits to the universal applicability of our hardware system. One limitation inherent to this type of neuromorphic device is the choice of implemented models for neuron and synapse dynamics. Models requiring, e.g., neuronal adaptation or exotic synaptic plasticity rules are difficult, if not impossible, to emulate on this substrate. Also, the total number of neurons and synapses sets a hard upper bound on the size of networks that can be emulated. However, the next generation of our highly accelerated hardware system will increase the number of available neurons and synapses by a factor of 10^3, and provide extended configurability for each of these units (Schemmel et al., 2010).
The main purpose of our hardware system is to provide a flexible platform for highly accelerated emulation of spiking neuronal networks. Other research groups pursue different design goals for their hardware systems. Some focus on dedicated hardware providing specific network topologies (e.g., Merolla and Boahen, 2006; Chicca et al., 2007), or comprising few neurons with more complex dynamics (e.g., Chen et al., 2010; Grassia et al., 2011; Brink et al., 2012). Others develop hardware systems of comparable configurability, but operate in biological real-time, mostly using off-chip communication (Vogelstein et al., 2007; Choudhary et al., 2012). Purely digital systems (Merolla et al., 2011; Furber et al., 2012; Imam et al., 2012a) and field-programmable analog arrays (FPAA; Basu et al., 2010) provide even more flexibility in configuration than our system, but have much smaller acceleration factors.
With the ultimate goal of brain size emulations, there exists a clear requirement for increasing the size and complexity of neuromorphic substrates. An accompanying upscaling of the fitting and calibration procedures presented here appears impractical for such orders of magnitude and can only be done for a small subset of components. Rather, it will be essential to step beyond simulation equivalence as a quality criterion for neuromorphic computing, and to develop a theoretical framework for circuits that are robust against, or even exploit the inherent imperfections of the substrate for achieving the required computational functions.
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Acknowledgments
We would like to thank Dan Husmann, Stefan Philipp, Bernhard Kaplan, and Moritz Schilling for their essential contributions to the neuromorphic platform, Johannes Bill, Jens Kremkow, Anders Lansner, Mikael Lundqvist, and Emre Neftci for assisting with the hardware implementation of the network models, Oliver Breitwieser for data analysis and Venelin Petkov for characterization measurements of the Spikey chip. The research leading to these results has received funding by the European Union 6th and 7th Framework Programme under grant agreement no. 15879 (FACETS), no. 269921 (BrainScaleS), and no. 243914 (Brain-i-Nets). Michael Schmuker has received support from the German ministry for research and education (BMBF) to Bernstein Center for Computational Neuroscience Berlin (grant no. 01GQ1001D) and from Deutsche Forschungsgemeinschaft within SPP 1392 (grant no. SCHM 2474/1-1). The main contributors for each section are denoted with author initials: neuromorphic system (Andreas Grübl and Eric Müller); synfire chain (Paul Müller); balanced random network (Thomas Pfeil); soft winner-take-all network (Thomas Pfeil); cortical attractor model (Mihai A. Petrovici); antennal lobe model (Michael Schmuker); liquid state machine (Sebastian Jeltsch). All authors contributed to writing this paper.
References
- Abeles M., Hayon G., Lehmann D. (2004). Modeling compositionality by dynamic binding of synfire chains. J. Comput. Neurosci. 17, 179–201 10.1023/B:JCNS.0000037682.18051.5f [DOI] [PubMed] [Google Scholar]
- Aertsen A., Diesmann M., Gewaltig M.-O. (1996). Propagation of synchronous spiking activity in feedforward neural networks. J. Physiol. (Paris) 90, 243–247 10.1016/S0928-4257(97)81432-5 [DOI] [PubMed] [Google Scholar]
- Akay M. (2007). Handbook of Neural Engineering – Part II: Neuro-Nanotechnology: Artificial Implants and Neural Prosthesis. New York: Wiley-IEEE Press [Google Scholar]
- Aviel Y., Mehring C., Abeles M., Horn D. (2003). On embedding synfire chains in a balanced network. Neural Comput. 15, 1321–1340 10.1162/089976603321780290 [DOI] [PubMed] [Google Scholar]
- Baker S. N., Spinks R., Jackson A., Lemon R. N. (2001). Synchronization in monkey motor cortex during a precision grip task. I. Task-dependent modulation in single-unit synchrony. J. Neurophysiol. 85, 869–885 [DOI] [PubMed] [Google Scholar]
- Basu A., Ramakrishnan S., Petre C., Koziol S., Brink S., Hasler P. (2010). Neural dynamics in reconfigurable silicon. IEEE Trans. Biomed. Circuits Syst. 4, 311–319 10.1109/TBCAS.2010.2051224 [DOI] [PubMed] [Google Scholar]
- Bi G., Poo M. (1998). Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci. 18, 10464–10472 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bill J., Schuch K., Brüderle D., Schemmel J., Maass W., Meier K. (2010). Compensating inhomogeneities of neuromorphic VLSI devices via short-term synaptic plasticity. Front. Comput. Neurosci. 4:129. 10.3389/fncom.2010.00129 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Brink S., Nease S., Hasler P., Ramakrishnan S., Wunderlich R., Basu A., et al. (2012). A learning-enabled neuron array IC based upon transistor channel models of biological phenomena. IEEE Trans. Biomed. Circuits Syst. PP, 1. 10.1109/TBCAS.2012.2197858 [DOI] [PubMed] [Google Scholar]
- Brüderle D., Bill J., Kaplan B., Kremkow J., Meier K., Müller E., et al. (2010). “Simulator-like exploration of cortical network architectures with a mixed-signal VLSI system,” in Proceedings of the 2010 International Symposium on Circuits and Systems (ISCAS) (Paris: IEEE Press). [Google Scholar]
- Brüderle D., Müller E., Davison A., Muller E., Schemmel J., Meier K. (2009). Establishing a novel modeling tool: a Python-based interface for a neuromorphic hardware system. Front. Neuroinform. 3:17. 10.3389/neuro.11.017.2009 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Brüderle D., Petrovici M., Vogginger B., Ehrlich M., Pfeil T., Millner S., et al. (2011). A comprehensive workflow for general-purpose neural modeling with highly configurable neuromorphic hardware systems. Biol. Cybern. 104, 263–296 10.1007/s00422-011-0435-9 [DOI] [PubMed] [Google Scholar]
- Brunel N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J. Comput. Neurosci. 8, 183–208 10.1023/A:1008925309027 [DOI] [PubMed] [Google Scholar]
- Carnevale T., Hines M. (2006). The NEURON Book. Cambridge: Cambridge University Press; 10.1017/CBO9780511541612 [DOI] [Google Scholar]
- Chen H., Saïghi S., Buhry L., Renaud S. (2010). Real-time simulation of biologically realistic stochastic neurons in VLSI. IEEE Trans. Neural Netw. 21, 1511–1517 10.1109/TNN.2010.2049028 [DOI] [PubMed] [Google Scholar]
- Chicca E., Whatley A. M., Lichtsteiner P., Dante V., Delbruck T., Del Giudice P., et al. (2007). A multichip pulse-based neuromorphic infrastructure and its application to a model of orientation selectivity. IEEE Trans. Circuits Syst. I Regul. Pap. 54, 981–993 10.1109/TCSI.2007.893509 [DOI] [Google Scholar]
- Choudhary S., Sloan S., Fok S., Neckar A., Trautmann E., Gao P., et al. (2012). “Silicon neurons that compute,” in Artificial Neural Networks and Machine Learning – ICANN 2012, Vol. 7552 of Lecture Notes in Computer Science, eds Villa A., Duch W., Erdi P., Masulli F., Palm G. (Berlin: Springer), 121–128 10.1007/978-3-642-33269-2_16 [DOI] [Google Scholar]
- Davison A., Brüderle D., Eppler J. M., Kremkow J., Muller E., Pecevski D., et al. (2009). PyNN: a common interface for neuronal network simulators. Front. Neuroinform. 2:11. 10.3389/neuro.11.011.2008 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Davison A., Muller E., Brüderle D., Kremkow J. (2010). A common language for neuronal networks in software and hardware. Neuromorph. Eng. 10.2417/1201001.1712 [DOI] [Google Scholar]
- Dayan P., Abbott L. F. (2001). Theoretical Neuroscience. Cambridge: MIT Press; 10.1016/S0306-4522(00)00552-2 [DOI] [Google Scholar]
- Diesmann M., Gewaltig M.-O., Aertsen A. (1999). Stable propagation of synchronous spiking in cortical neural networks. Nature 402, 529–533 10.1038/990101 [DOI] [PubMed] [Google Scholar]
- Douglas R. J., Martin K. A. C. (2004). Neuronal circuits of the neocortex. Annu. Rev. Neurosci. 27, 419–451 10.1146/annurev.neuro.27.070203.144152 [DOI] [PubMed] [Google Scholar]
- Eppler J. M., Helias M., Muller E., Diesmann M., Gewaltig M. (2009). PyNEST: a convenient interface to the NEST simulator. Front. Neuroinform. 2:12. 10.3389/neuro.11.012.2008 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Farquhar E., Hasler P. (2005). A bio-physically inspired silicon neuron. IEEE Trans. Circuits Syst. I Regul. Pap. 52, 477–488 10.1109/TCSI.2004.842871 [DOI] [Google Scholar]
- Fieres J., Grübl A., Philipp S., Meier K., Schemmel J., Schürmann F. (2004). “A platform for parallel operation of VLSI neural networks,” in Proceedings of the 2004 Brain Inspired Cognitive Systems Conference (BICS) (Stirling: University of Stirling). [Google Scholar]
- Furber S., Lester D., Plana L., Garside J., Painkras E., Temple S., et al. (2012). Overview of the spinnaker system architecture. IEEE Trans. Comput. PP, 1. 10.1109/TC.2012.142 [DOI] [Google Scholar]
- Gao P., Benjamin B., Boahen K. (2012). Dynamical system guided mapping of quantitative neuronal models onto neuromorphic hardware. IEEE Trans. Circuits Syst. I Regul. Pap. 59, 2383–2394 10.1109/TCSI.2012.2188956 [DOI] [Google Scholar]
- Gewaltig M.-O., Diesmann M. (2007). NEST (neural simulation tool). Scholarpedia 2, 1430. 10.4249/scholarpedia.1430 [DOI] [Google Scholar]
- Grassia F., Buhry L., Lévi T., Tomas J., Destexhe A., Saïghi S. (2011). Tunable neuromimetic integrated system for emulating cortical neuron models. Front. Neurosci. 5:134. 10.3389/fnins.2011.00134 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Grossberg S. (1973). Contour enhancement, short term memory, and constancies in reverberating neural networks. Stud. Appl. Math. 52, 213–257 [Google Scholar]
- Gupta A., Wang Y., Markram H. (2000). Organizing principles for a diversity of GABAergic interneurons and synapses in the neocortex. Science 287, 273–278 10.1126/science.287.5451.273 [DOI] [PubMed] [Google Scholar]
- Gütig R., Sompolinsky H. (2006). The tempotron: a neuron that learns spike timing-based decisions. Nat. Neurosci. 9, 420–428 10.1038/nn1643 [DOI] [PubMed] [Google Scholar]
- Häusler C., Nawrot M. P., Schmuker M. (2011). “A spiking neuron classifier network with a deep architecture inspired by the olfactory system of the honeybee,” in Proceedings of the 2011 5th International IEEE/EMBS Conference on Neural Engineering (Cancun: IEEE Press), 198–202 [Google Scholar]
- Hines M., Davison A. P., Muller E. (2009). NEURON and Python. Front. Neuroinform. 3:1. 10.3389/neuro.11.001.2009 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Imam N., Akopyan F., Arthur J., Merolla P., Manohar R., Modha D. (2012a). “A digital neurosynaptic core using event-driven QDI circuits,” in IEEE International Symposium on Asynchronous Circuits and Systems (ASYNC) (Lyngby: IEEE Press), 25–32 [Google Scholar]
- Imam N., Cleland T. A., Manohar R., Merolla P. A., Arthur J. V., Akopyan F., et al. (2012b). Implementation of olfactory bulb glomerular layer computations in a digital neurosynaptic core. Front. Neurosci. 6:83. 10.3389/fnins.2012.00083 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Indiveri G., Chicca E., Douglas R. (2006). A VLSI array of low-power spiking neurons and bistable synapses with spike-timing dependent plasticity. IEEE Trans. Neural Netw. 17, 211–221 10.1109/TNN.2005.860850 [DOI] [PubMed] [Google Scholar]
- Indiveri G., Chicca E., Douglas R. (2009). Artificial cognitive systems: from VLSI networks of spiking neurons to neuromorphic cognition. Cognit. Comput. 1, 119–127 10.1007/s12559-008-9003-6 [DOI] [Google Scholar]
- Indiveri G., Linares-Barranco B., Hamilton T. J., van Schaik A., Etienne-Cummings R., Delbruck T., et al. (2011). Neuromorphic silicon neuron circuits. Front. Neurosci. 5:73. 10.3389/fnins.2011.00073 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Itti L., Koch C. (2001). Computational modeling of visual attention. Nat. Rev. Neurosci. 2, 194–203 10.1038/35058500 [DOI] [PubMed] [Google Scholar]
- Jaeger H. (2001). The “Echo State” Approach to analysing and Training Recurrent Neural Networks. Technical Report GMD Report 148. German National Research Center for Information Technology, St. Augustine [Google Scholar]
- Kaplan B., Brüderle D., Schemmel J., Meier K. (2009). “High-conductance states on a neuromorphic hardware system,” in Proceedings of the 2009 International Joint Conference on Neural Networks (IJCNN) (Atlanta: IEEE Press), 1524–1530 10.1109/IJCNN.2009.5178951 [DOI] [Google Scholar]
- Kremkow J., Aertsen A., Kumar A. (2010a). Gating of signal propagation in spiking neural networks by balanced and correlated excitation and inhibition. J. Neurosci. 30, 15760–15768 10.1523/JNEUROSCI.3874-10.2010 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kremkow J., Perrinet L. U., Masson G. S., Aertsen A. (2010b). Functional consequences of correlated excitatory and inhibitory conductances in cortical networks. J. Comput. Neurosci. 28, 579–594 10.1007/s10827-010-0240-9 [DOI] [PubMed] [Google Scholar]
- Kumar A., Schrader S., Aertsen A., Rotter S. (2008). The high-conductance state of cortical networks. Neural Comput. 20, 1–43 10.1162/neco.2008.20.1.1 [DOI] [PubMed] [Google Scholar]
- Lazzaro J., Ryckebusch S., Mahowald M., Mead C. (1988). “Winner-take-all networks of O(N) complexity,” in Advances in Neural Information Processing Systems (NIPS) (Denver: Morgan Kaufmann; ), 703–711 [Google Scholar]
- Linster C., Smith B. H. (1997). A computational model of the response of honey bee antennal lobe circuitry to odor mixtures: overshadowing, blocking and unblocking can arise from lateral inhibition. Behav. Brain Res. 87, 1–14 10.1016/S0166-4328(96)02271-1 [DOI] [PubMed] [Google Scholar]
- Litvak V., Sompolinsky H., Segev I., Abeles M. (2003). On the transmission of rate code in long feed-forward networks with excitatory-inhibitory balance. J. Neurosci. 23, 3006–3015 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lundqvist M., Compte A., Lansner A. (2010). Bistable, irregular firing and population oscillations in a modular attractor memory network. PLoS Comput. Biol. 6:e1000803. 10.1371/journal.pcbi.1000803 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lundqvist M., Rehn M., Djurfeldt M., Lansner A. (2006). Attractor dynamics in a modular network model of neocortex. Network 17, 253–276 10.1080/09548980600774619 [DOI] [PubMed] [Google Scholar]
- Maass W. (2000). On the computational power of winner-take-all. Neural Comput. 12, 2519–2535 10.1162/089976600300014827 [DOI] [PubMed] [Google Scholar]
- Maass W., Natschläger T., Markram H. (2002). Real-time computing without stable states: a new framework for neural compuation based on perturbation. Neural Comput. 14, 2531–2560 10.1162/089976602760407955 [DOI] [PubMed] [Google Scholar]
- Marder E., Goaillard J.-M. (2006). Variability, compensation and homeostasis in neuron and network function. Nat. Rev. Neurosci. 7, 563–574 10.1038/nrn1949 [DOI] [PubMed] [Google Scholar]
- Markram H., Wang Y., Tsodyks M. (1998). Differential signaling via the same axon of neocortical pyramidal neurons. Proc. Natl. Acad. Sci. U.S.A. 95, 5323–5328 10.1073/pnas.95.9.5323 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Mead C. (1989). Analog VLSI and Neural Systems. Boston, MA: Addison-Wesley; 10.1007/978-1-4613-1639-8 [DOI] [Google Scholar]
- Merolla P., Arthur J., Akopyan F., Imam N., Manohar R., Modha D. (2011). “A digital neurosynaptic core using embedded crossbar memory with 45pJ per spike in 45nm,” in Proceedings of the 2011 Custom Integrated Circuits Conference (CICC) (San Jose: IEEE Press), 1–4 10.1109/CICC.2011.6055294 [DOI] [Google Scholar]
- Merolla P., Boahen K. (2006). “Dynamic computation in a recurrent network of heterogeneous silicon neurons,” in Proceedings of the 2006 International Symposium on Circuits and Systems (ISCAS) (Island of Kos: IEEE Press), 4539–4542 10.1109/ISCAS.2006.1693639 [DOI] [Google Scholar]
- Neftci E., Chicca E., Indiveri G., Douglas R. (2011). A systematic method for configuring VLSI networks of spiking neurons. Neural Comput. 23, 2457–2497 10.1162/NECO_a_00182 [DOI] [PubMed] [Google Scholar]
- Neftci E., Indiveri G. (2010). “A device mismatch compensation method for VLSI neural networks,” in Proceedings of the 2010 Biomedical Circuits and Systems Conference (BioCAS) (San Diego: IEEE Press), 262–265 10.1109/BIOCAS.2010.5709621 [DOI] [Google Scholar]
- Nelsen R. B. (1998). An Introduction to Copulas. (Lecture Notes in Statistics), 1st Edn New York: Springer [Google Scholar]
- Oster M., Douglas R., Liu S. (2009). Computation with spikes in a winner-take-all network. Neural Comput. 21, 2437–2465 10.1162/neco.2009.07-08-829 [DOI] [PubMed] [Google Scholar]
- Perez-Orive J., Bazhenov M., Laurent G. (2004). Intrinsic and circuit properties favor coincidence detection for decoding oscillatory input. J. Neurosci. 24, 6037–6047 10.1523/JNEUROSCI.1084-04.2004 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Perkel D. H., Gerstein G. L., Moore G. P. (1967). Neuronal spike trains and stochastic point processes. II. Simultaneous spike trains. Biophys. J. 7, 419–440 10.1016/S0006-3495(67)86596-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pfeil T., Potjans T. C., Schrader S., Potjans W., Schemmel J., Diesmann M., et al. (2012). Is a 4-bit synaptic weight resolution enough? – constraints on enabling spike-timing dependent plasticity in neuromorphic hardware. Front. Neurosci. 6:90. 10.3389/fnins.2012.00090 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Prut Y., Vaadia E., Bergman H., Haalman I., Hamutal S., Abeles M. (1998). Spatiotemporal structure of cortical activity: properties and behavioral relevance. J. Neurophysiol. 79, 2857–2874 [DOI] [PubMed] [Google Scholar]
- Renaud S., Tomas J., Lewis N., Bornat Y., Daouzli A., Rudolph M., et al. (2010). PAX: a mixed hardware/software simulation platform for spiking neural networks. Neural Netw. 23, 905–916 10.1016/j.neunet.2010.02.006 [DOI] [PubMed] [Google Scholar]
- Rolls E. T., Deco G. (2010). The Noisy Brain: Stochastic Dynamics as a Principle. Oxford: Oxford University Press; 10.1093/acprof:oso/9780199587865.001.0001 [DOI] [Google Scholar]
- Schemmel J., Brüderle D., Grübl A., Hock M., Meier K., Millner S. (2010). “A wafer-scale neuromorphic hardware system for large-scale neural modeling,” in Proceedings of the 2010 International Symposium on Circuits and Systems (ISCAS) (Paris: IEEE Press), 1947–1950 10.1109/ISCAS.2010.5536970 [DOI] [Google Scholar]
- Schemmel J., Brüderle D., Meier K., Ostendorf B. (2007). “Modeling synaptic plasticity within networks of highly accelerated I&F neurons,” in Proceedings of the 2007 International Symposium on Circuits and Systems (ISCAS) (New Orleans: IEEE Press), 3367–3370 10.1109/ISCAS.2007.378289 [DOI] [Google Scholar]
- Schemmel J., Grübl A., Meier K., Müller E. (2006). “Implementing synaptic plasticity in a VLSI spiking neural network model,” in Proceedings of the 2006 International Joint Conference on Neural Networks (IJCNN) (Vancouver: IEEE Press), 1–6 10.1109/IJCNN.2006.246651 [DOI] [Google Scholar]
- Schmuker M., Brüderle D., Schrader S., Nawrot M. P. (2011a). “Ten thousand times faster: classifying multidimensional data on a spiking neuromorphic hardware system,” Front. Comput. Neurosci. Conference Abstract: BC11: Computational Neuroscience & Neurotechnology Bernstein Conference & Neurex Annual Meeting 2011, 109. 10.3389/conf.fncom.2011.53.00109 [DOI] [Google Scholar]
- Schmuker M., Yamagata N., Nawrot M. P., Menzel R. (2011b). Parallel Representation of Stimulus Identity and Intensity in a Dual Pathway Model Inspired by the Olfactory System of the Honeybee. Front. Neuroeng. 4:17. 10.3389/fneng.2011.00017 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Schmuker M., Schneider G. (2007). Processing and classification of chemical data inspired by insect olfaction. Proc. Natl. Acad. Sci. U.S.A. 104, 20285–20289 10.1073/pnas.0705683104 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Schrader S., Diesmann M., Morrison A. (2010). A compositionality machine realized by a hierarchic architecture of synfire chains. Front. Comput. Neurosci. 4:154. 10.3389/fncom.2010.00154 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Schrader S., Grün S., Diesmann M., Gerstein G. (2008). Detecting synfire chain activity using massively parallel spike train recording. J. Neurophysiol. 100, 2165–2176 10.1152/jn.01245.2007 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Serrano-Gotarredona R., Oster M., Lichtsteiner P., Linares-Barranco A., Paz-Vicente R., Gómez-Rodríguez F., et al. (2006). “AER building blocks for multi-layer multi-chip neuromorphic vision systems,” in Advances in Neural Information Processing Systems, Vol. 18, eds Weiss Y., Schölkopf B., Platt J. (Cambridge, MA: MIT Press; ), 1217–1224 [Google Scholar]
- Soltani A., Wang X.-J. (2010). Synaptic computation underlying probabilistic inference. Nat. Neurosci. 13, 112–119 10.1038/nn.2450 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Song S., Miller K. D., Abbott L. F. (2000). Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat. Neurosci. 3, 919–926 10.1038/78829 [DOI] [PubMed] [Google Scholar]
- Stopfer M., Bhagavan S., Smith B. H., Laurent G. (1997). Impaired odour discrimination on desynchronization of odour-encoding neural assemblies. Nature 390, 70–74 10.1038/36335 [DOI] [PubMed] [Google Scholar]
- Tetzlaff T., Morrison A., Timme M., Diesmann M. (2005). “Heterogeneity breaks global synchrony in large networks,” in Proceedings of the 30th Göttingen Neurobiology Conference, Göttingen [Google Scholar]
- Tsodyks M. V., Markram H. (1997). The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proc. Natl. Acad. Sci. U.S.A. 94, 719–723 10.1073/pnas.94.2.719 [DOI] [PMC free article] [PubMed] [Google Scholar]
- van Vreeswijk C., Sompolinsky H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 274, 1724–1726 10.1126/science.274.5293.1724 [DOI] [PubMed] [Google Scholar]
- Vogels T. P., Abbott L. F. (2005). Signal propagation and logic gating in networks of integrate-and-fire neurons. J. Neurosci. 25, 10786–10795 10.1523/JNEUROSCI.3508-05.2005 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Vogelstein R., Mallik U., Vogelstein J., Cauwenberghs G. (2007). Dynamically reconfigurable silicon array of spiking neurons with conductance-based synapses. IEEE Trans. Neural Netw. 18, 253–265 10.1109/TNN.2006.883007 [DOI] [PubMed] [Google Scholar]
- Wilson R. I., Laurent G. (2005). Role of GABAergic inhibition in shaping odor-evoked spatiotemporal patterns in the drosophila antennal lobe. J. Neurosci. 25, 9069–9079 10.1523/JNEUROSCI.1744-05.2005 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Yazdanbakhsh A., Babadi B., Rouhani S., Arabzadeh E., Abbassian A. (2002). New attractor states for synchronous activity in synfire chains with excitatory and inhibitory coupling. Biol. Cybern. 86, 367–378 10.1007/s00422-001-0293-y [DOI] [PubMed] [Google Scholar]