Abstract
We consider the open issue of how the energy efficiency of neural information transmission in a general neuronal array constrains the network size, and how well this network size ensures reliable transmission of neural information in a noisy environment. By direct mathematical analysis, we have obtained general solutions proving that there exists an optimal number of neurons in the network, at which the average coding energy cost (defined as energy consumption divided by mutual information) per neuron passes through a global minimum for both subthreshold and suprathreshold signals. As background noise intensity increases, the optimal neuronal number decreases for subthreshold signals and increases for suprathreshold signals. The existence of an optimal number of neurons in an array network reveals a general rule for population coding: the neuronal number should be large enough to ensure reliable information transmission that is robust to the noisy environment, yet small enough to minimize energy cost.
Neuronal activity related to information processing in brain circuits is metabolically expensive1. For example, the metabolic cost of the human brain may rise from 20% to 40% of whole-body energy production when humans switch from a resting state to a working state. Action potentials, which are electrical signals that rely on the potential energy stored in transmembrane ion gradients, account for a large fraction of this energy2. These metabolic demands could be large enough to influence the design, function and evolution of brains3,4,5,6,7,8,9,10.
Population coding11, i.e., the cooperative encoding of input signals by a group of neurons, is a basic neural coding strategy used in many nervous systems12. Studies have shown that neuronal activities can be synchronized to remain robust against noise and to promote reliable information transmission13,14,15. It has been suggested16,17 that the number of neurons involved in synchronous neuronal activity is critical for reliable information delivery within a feed-forward multi-layer cortical network. However, for population coding in such an array network, the relationship between the energy efficiency of information transmission and the number of neurons has not been carefully considered, especially under different background noise levels.
Moreover, background noise is present at all levels of the nervous system, from the microscopic level, such as stochastic ion channel gating in membranes and biochemical noise at synapses, to macroscopic levels18,19,20. Noise may degrade the reliability of information transmission and require the involvement of more neurons to perform an information-processing task15. Therefore, it is critical to determine the proper size of a neuronal array network for reliable information transmission with minimal energy cost in a noisy environment.
In an earlier work, Barlow (1961)21 suggested sparseness as one of the principles important to sensory representation. Because sparse codes are defined as representations with low activity ratios (i.e., at any given time only a small proportion of neurons are active), they have been proposed as a means to conserve metabolic costs. Levy and Baxter (1996)22 demonstrated that for any given mean firing rate there should exist an optimal firing probability in a neural network, such that the capacity to represent sensory information is maximized while energy expenditure is minimized. Later, the metabolic cost of the neural electrical signals generated by retinal photoreceptors3 and of action potential generation in axons was quantified in experimental as well as computational studies5,7,8,9,10,23,24,25. These studies linked the energy efficiency of spiking to the number of ion channels19, as well as to the ratio of sodium to potassium channel density within a single neuron5,7,9,23,24,25. However, the way limited energy constrains the size and coding efficiency of a neuronal network in the presence of different noise levels has not been carefully examined. Over the past few decades, studies of stochastic resonance have shown that noise plays an important role in neural information processing for both sub- and suprathreshold signals14,26,27. In this paper, we carry out a mathematical analysis and numerical simulations to investigate network energy coding efficiency and its dependence on population size, signal intensity and noise intensity, considering a network built from a type of excitable neuronal model.
We proceeded by first solving a stochastic one-dimensional bistable Langevin equation, which mimics action potential generation as a particle crossing the barrier of a double well, to obtain an analytical solution for the pulse-signal detection rate and the spontaneous firing rate28. Coincidence detection (CD)29,30,31,32, in the context of neurobiology, is a process by which a neuron or a neural circuit encodes information by detecting the occurrence of temporally close but spatially distributed input signals from the presynaptic neurons of a network. A number of reports31,32,33 have suggested that networks with postsynaptic coincidence detectors may be commonly used in different cortical areas to read out synchronous activity against a noisy background. Here, we constructed an array network model with N bistable neurons and a CD neuron that pools the network information, and we calculated the mutual information and energy cost of the network.
Results
The bistable neuron model used here can be described with the following equation:

$$\frac{dv}{dt} = -\frac{dU(v)}{dv} + \xi(t), \qquad (1)$$
where v is the membrane potential, and U is a double-well potential, defined as:

$$U(v) = -\frac{a}{2}v^2 + \frac{b}{4}v^4. \qquad (2)$$

Note that U has two minima at $v_\pm = \pm\sqrt{a/b}$ and a saddle point at $v_s = 0$. In the following calculation, $a = b = 1$ by default. $\xi(t)$ is a Gaussian white noise, with

$$\langle \xi(t) \rangle = 0, \qquad \langle \xi(t)\,\xi(t') \rangle = 2D\,\delta(t - t'), \qquad (3)$$
and D the noise intensity. We assume the neuron to be in its resting state when the particle is in the left well and in the excited state when the particle has crossed the barrier to the right well, driven by noise or signal stimulation.
It is assumed that a force of short duration moves the particle horizontally from the resting state to a point v′. When the force disappears, the particle drifts in the region of the saddle point, where the phase trajectories are repelled, causing the particle to accelerate away toward one of the two minima. Following Lecar and Nossal's approach of linearizing around the saddle point, we obtain the probability of finding the particle in the right well after a sufficiently long time, i.e., the probability that a pulse input signal is detected by the neuron20,34:

$$P_c(\Delta v) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{\Delta v}{\sqrt{2D}}\right)\right], \qquad (4)$$

where $\Delta v = v' - v_s$ is the strength of the pulse input, and $\operatorname{erf}(\cdot)$ is the Gaussian error function. With no input signal, neurons fire spikes spontaneously under perturbation by noise. The spontaneous firing rate of a bistable neuron is derived from Kramers' formula35:

$$r_{sp} = \frac{\sqrt{U''(v_-)\,\lvert U''(v_s)\rvert}}{2\pi}\,\exp\!\left(-\frac{\Delta U}{D}\right) = \frac{\sqrt{2}}{2\pi}\,\exp\!\left(-\frac{1}{4D}\right), \qquad (5)$$

where $\Delta U = a^2/(4b) = 1/4$ is the barrier height.
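To make these two closed-form results concrete, the following minimal Python sketch (our illustration, not the authors' code) evaluates Eq. 4 and cross-checks it with a direct Euler-Maruyama simulation of the Langevin equation. The parameter choice a = b = 1, the absorbing criterion |v| ≥ 0.9 and all numerical settings are assumptions made here for demonstration.

```python
import numpy as np
from scipy.special import erf

def detection_rate(dv, D):
    """Analytic detection probability from the linearization around
    the saddle point (Eq. 4), with a = b = 1."""
    return 0.5 * (1.0 + erf(dv / np.sqrt(2.0 * D)))

def simulated_detection_rate(dv, D, trials=5000, t_max=50.0, dt=1e-3, seed=1):
    """Euler-Maruyama simulation of dv/dt = v - v^3 + xi(t).
    Each trial starts at the saddle displaced by dv; a trial is
    absorbed once it first reaches either well (|v| >= 0.9)."""
    rng = np.random.default_rng(seed)
    v = np.full(trials, float(dv))
    active = np.ones(trials, dtype=bool)
    for _ in range(int(t_max / dt)):
        if not active.any():
            break
        x = v[active]
        v[active] = x + (x - x**3) * dt \
            + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.size)
        active &= np.abs(v) < 0.9
    return float(np.mean(v > 0))  # fraction ending in the right (excited) well
```

Because trials are absorbed on first arrival at a well, later noise-driven hopping between wells is excluded, so the simulated fraction should approach the erf expression for the parameter range of interest.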
Figure 1(a) shows the firing probability of the neuron as a function of input pulse strength Δv for different noise intensities D, as described by Eq. 4. It is clear that the effective firing threshold fluctuates with the strength of the noise perturbation. The neuron fires in response to subthreshold inputs (Δv < 0) with the assistance of noise, the phenomenon well known as stochastic resonance28, and the detection rate increases as the noise intensity increases. However, noise degrades the neuron's reliability for suprathreshold inputs (Δv > 0), for which the detection rate decreases as the noise intensity increases. For threshold inputs (Δv = 0), the detection rate is 50%, independent of noise intensity. Our results are consistent with previous simulation results for a Hodgkin-Huxley (HH) system (see, e.g., reference36, where the inputs are voltage pulses). Figure 1(b) shows that the spontaneous firing rate increases with noise intensity; the same result was obtained by Zeng et al. in stochastic simulations of the HH neuron37.
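As a brief usage example under the same assumptions, evaluating the analytic rate at a few illustrative values reproduces the trends just described: Pc rises toward 0.5 with D for Δv < 0, falls toward 0.5 for Δv > 0, and stays pinned at 0.5 for Δv = 0.

```python
for dv in (-0.1, 0.0, 0.1):   # sub-, exactly-, and suprathreshold pulses
    for D in (0.05, 0.5):     # weak and strong noise
        print(f"dv={dv:+.1f}, D={D}: Pc={detection_rate(dv, D):.3f}")
```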
Next, we consider the information capacity and energy cost of an array of N bistable neurons receiving a pulse signal whose intensity $\Delta v$ is distributed uniformly over a fixed interval, with probability density function $p(\Delta v)$ and mean input strength $\overline{\Delta v}$. The information in the synchronous firings of this N-neuron array is pooled by a CD neuron, see Fig. 1(c). The CD neuron receives the outputs of the N bistable neurons and is excited if $n$ ($n \ge \theta$) inputs arrive simultaneously, where θ is the threshold of the CD neuron. In response to pulse inputs, this network has two output values: r = 1 if the CD neuron fires, and r = 0 if it fails to fire. The conditional probability that the CD neuron fires when the input is Δv is given by a cumulative binomial distribution

$$P(r = 1 \mid \Delta v) = \sum_{n=\theta}^{N} \binom{N}{n} P_c(\Delta v)^n \left[1 - P_c(\Delta v)\right]^{N-n},$$
where $\binom{N}{n}$ is a binomial coefficient, and $P_c(\Delta v)$ is the detection rate of a bistable neuron for a pulse input with strength Δv, determined by Eq. 4. The conditional probability that the CD neuron does not fire when the input is Δv is given by

$$P(r = 0 \mid \Delta v) = 1 - P(r = 1 \mid \Delta v).$$
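These two conditional probabilities can be computed directly with a standard binomial survival function; a minimal sketch, reusing detection_rate from above:

```python
from scipy.stats import binom

def cd_fire_prob(dv, D, N, theta):
    """P(r = 1 | dv): probability that at least theta of the N
    bistable neurons detect the pulse (the cumulative binomial above)."""
    pc = detection_rate(dv, D)
    return binom.sf(theta - 1, N, pc)  # sf(k) = P(X > k), i.e. P(X >= theta)
```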
According to Bayes' formula, the probability that the output is r can be obtained by

$$P(r) = \int p(\Delta v)\, P(r \mid \Delta v)\, d(\Delta v).$$
According to Shannon information theory38, the mutual information between input S and output R is defined as

$$I(S; R) = \sum_{s} \sum_{r} p(s)\, P(r \mid s) \log_2 \frac{P(r \mid s)}{P(r)}.$$

In our case, the input is continuous and the output is discrete; thus, the summation over inputs must be rewritten as an integral:

$$I(S; R) = \sum_{r \in \{0,1\}} \int p(\Delta v)\, P(r \mid \Delta v) \log_2 \frac{P(r \mid \Delta v)}{P(r)}\, d(\Delta v).$$

Finally, writing $P_1(\Delta v) = P(r = 1 \mid \Delta v)$, we obtain the following description of the mutual information for the CD neuron:

$$I = \int p(\Delta v)\left[ P_1(\Delta v) \log_2 \frac{P_1(\Delta v)}{P(1)} + \bigl(1 - P_1(\Delta v)\bigr) \log_2 \frac{1 - P_1(\Delta v)}{P(0)} \right] d(\Delta v).$$
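A numerical evaluation of this integral is straightforward. The sketch below assumes, purely for illustration, a uniform input density on $[\overline{\Delta v} - \delta, \overline{\Delta v} + \delta]$ with a hypothetical half-width δ = 0.1, since the exact interval endpoints are not reproduced here.

```python
def mutual_information(N, theta, D, dv_mean, delta=0.1, n_grid=401):
    """Mutual information I(S;R) in bits between the uniform pulse
    ensemble and the binary CD output, by numerical integration."""
    dv = np.linspace(dv_mean - delta, dv_mean + delta, n_grid)
    p1 = cd_fire_prob(dv, D, N, theta)  # P(r=1 | dv) on the grid
    P1 = p1.mean()                      # P(r=1); mean over a uniform p(dv)
    info = 0.0
    for cond, marg in ((p1, P1), (1.0 - p1, 1.0 - P1)):
        with np.errstate(divide="ignore", invalid="ignore"):
            term = cond * np.log2(cond / marg)
        info += np.nan_to_num(term).mean()  # convention: 0*log(0) -> 0
    return info
```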
Figure 2(a) shows that $I_{\mathrm{single}} = I/N$, the average mutual information per neuron, reaches a global maximum when the network contains an optimal number of neurons for subthreshold signals (e.g., $\overline{\Delta v} = -0.1$). The optimal neuronal number becomes smaller as the noise intensity increases. For suprathreshold signals (e.g., $\overline{\Delta v} = 0.1$), the average mutual information per neuron can also be maximized at an optimal neuronal number; however, when the noise intensity decreases, the optimal neuronal number also decreases (see Fig. 2(b)). From Fig. 1(a), we see that for both subthreshold and suprathreshold signals, Pc converges to 0.5 as the noise intensity increases. As a result, in Fig. 2(a,b), the optimal neuronal number moves toward 20 with the CD threshold set to θ = 10 (results not shown). In contrast, with decreasing noise, to re-establish a maximum, the subthreshold case requires more neurons to compensate for the loss due to a decreased Pc, whereas the suprathreshold case requires fewer neurons to limit the excess due to an increased Pc. For both subthreshold and suprathreshold signals, the average mutual information per neuron can be maximized at an optimal noise intensity, displaying the classic subthreshold stochastic resonance phenomenon (see Fig. 2(c)) and suprathreshold stochastic resonance phenomenon (see Fig. 2(d))39.
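The location of this per-neuron information maximum can be found by scanning N, as in the following usage sketch (the threshold and noise values are illustrative only):

```python
def optimal_N_for_info(theta, D, dv_mean, N_max=100):
    """N that maximizes the average mutual information per neuron."""
    Ns = np.arange(theta, N_max + 1)
    per_neuron = [mutual_information(N, theta, D, dv_mean) / N for N in Ns]
    return int(Ns[int(np.argmax(per_neuron))])

print(optimal_N_for_info(theta=10, D=0.5, dv_mean=-0.1))  # subthreshold example
```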
For an N-neuron array system, with one input pulse arriving per action-potential onset time interval Δt, the total network energy expenditure per unit time can be written as

$$E_{\mathrm{system}} = E_{\mathrm{noise}} + \langle E_{\mathrm{signal}} \rangle,$$

where $E_{\mathrm{noise}}$ and $E_{\mathrm{signal}}$ are the energy costs related to action potential generation. For simplicity, we take the energy cost of one action potential to be 1. $E_{\mathrm{noise}}$ is the energy cost of the spontaneous firings per unit time, $E_{\mathrm{noise}} = N r_{sp}$. $E_{\mathrm{signal}}(\Delta v)$ is the energy cost of the action potentials fired in response to input pulses with strength Δv, equal to $N P_c(\Delta v)/\Delta t$ if the inputs are applied at the beginning of this time interval and zero otherwise. Therefore, $\langle E_{\mathrm{signal}} \rangle = (N/\Delta t) \int p(\Delta v)\, P_c(\Delta v)\, d(\Delta v)$ is the average energy cost of the action potentials evoked by an input pulse drawn from the distribution $p(\Delta v)$. Figure 3(a) shows the dependence of $E_{\mathrm{single}} = E_{\mathrm{system}}/N$, the average energy cost of each neuron per unit time, on the input pulse strength for different noise intensities. Note that when the noise is weak, the spikes are mostly induced by the signals, so the curves behave similarly to those shown in Fig. 1(a). Interestingly, for subthreshold signals, e.g., $\overline{\Delta v} = -0.1$, $E_{\mathrm{single}}$ increases as the noise intensity D increases, while for suprathreshold signals, e.g., $\overline{\Delta v} = 0.1$, $E_{\mathrm{single}}$ first decreases and then increases slightly with D (see Fig. 3(b)). For subthreshold signals, e.g., $\overline{\Delta v} = -0.1$, $E_{\mathrm{system}}$, the energy consumption of the whole system per unit time, increases monotonically with both noise intensity and neuronal number; see Fig. 3(c). For suprathreshold signals, e.g., $\overline{\Delta v} = 0.1$, $E_{\mathrm{system}}$ also increases with neuronal number; however, $E_{\mathrm{system}}$ is large for weak noise and becomes small for high noise intensity, see Fig. 3(d).
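Under the unit-cost-per-spike convention above, the average energy cost per neuron per unit time can be sketched as follows; the onset interval Δt = 1 and the δ = 0.1 input interval are again illustrative assumptions, not values taken from the paper.

```python
def energy_per_neuron(D, dv_mean, dt_ap=1.0, delta=0.1, n_grid=401):
    """E_single: spontaneous spikes at the Kramers rate (a = b = 1)
    plus, on average, one detected pulse per onset interval dt_ap."""
    r_sp = (np.sqrt(2.0) / (2.0 * np.pi)) * np.exp(-1.0 / (4.0 * D))
    dv = np.linspace(dv_mean - delta, dv_mean + delta, n_grid)
    return r_sp + detection_rate(dv, D).mean() / dt_ap
```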
We now define a new measure to quantify how efficiently the system converts a certain amount of energy into a certain amount of transmitted information, i.e., the energy cost per unit of information transmitted, or coding energy cost: $\varepsilon = E/I$, where E is the average energy consumption per neuron per unit time, and I is the average mutual information per neuron per unit time between the inputs and outputs of the neuron array. Now, we have

$$\varepsilon = \frac{E_{\mathrm{single}}}{I/N} = \frac{N\, E_{\mathrm{single}}}{I}.$$
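Combining the pieces gives the coding energy cost and its minimizer over the network size; a minimal sketch under the same assumptions as above:

```python
def coding_energy_cost(N, theta, D, dv_mean):
    """Energy per bit: average energy per neuron divided by average
    mutual information per neuron (both per unit time, dt_ap = 1)."""
    E_single = energy_per_neuron(D, dv_mean)
    I_single = mutual_information(N, theta, D, dv_mean) / N
    return E_single / I_single

def optimal_N_for_cost(theta, D, dv_mean, N_max=100):
    """N_opt minimizing the coding energy cost."""
    Ns = np.arange(theta, N_max + 1)
    costs = [coding_energy_cost(N, theta, D, dv_mean) for N in Ns]
    return int(Ns[int(np.argmin(costs))])
```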
Laughlin et al. found that in noise-limited signaling systems, a low-capacity, weak pathway transmits information more economically, which motivates the idea of distributing information coding among multiple pathways3. The analysis of the array network supports this idea. Figure 4(a) shows that for subthreshold signals, although the detection rate of a single neuron is low, the network attains a low coding energy cost in the information coding process at weak noise intensities. Moreover, our results show that the coding energy cost passes through a global minimum as a function of the neuronal number for different noise intensities. The optimal neuronal number N_opt, corresponding to the minimum coding energy cost $\varepsilon_{\min}$, shifts toward smaller values as the noise intensity increases. For a suprathreshold stimulus, the coding energy cost also passes through a global minimum as a function of neuronal number; however, the corresponding optimal neuronal number shifts toward larger values as the noise intensity increases (see Fig. 4(b)). Interestingly, as the noise intensity increases, the optimal neuronal number N_opt for both sub- and suprathreshold signals converges to a small range between N = 15 and 25 (see Fig. 4(c)). This convergence occurs because the optimal neuronal number for maximal mutual information, as discussed above (Fig. 2(a,b)), converges from opposite directions to 20 in the large-noise limit, while the energy cost does not change greatly across noise intensities. Moreover, we found that for a given noise intensity (e.g., D = 0.5), the maximal input pulse frequency that the bistable neurons can receive, which is the inverse of the action-potential onset time Δt, can significantly modulate both N_opt and $\varepsilon_{\min}$ for different input pulse intensities, suggesting an energy-saving mechanism for information coding in higher frequency bands, as observed in recent experimental findings25.
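The opposite shifts of N_opt with noise for sub- and suprathreshold inputs can be probed with the sketch above; the following usage loop prints N_opt for a few illustrative noise intensities (the outputs depend on the assumed δ and Δt and need not match the paper's figures):

```python
for dv_mean in (-0.1, 0.1):    # sub- vs suprathreshold mean input
    for D in (0.2, 0.5, 1.0):  # increasing noise intensity
        N_opt = optimal_N_for_cost(theta=10, D=D, dv_mean=dv_mean)
        print(f"dv_mean={dv_mean:+.1f}, D={D}: N_opt={N_opt}")
```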
Discussion
Consuming only several watts of energy, mammalian brains are able to carry out on the order of 1000 trillion operations per second40. The biophysical mechanism behind this extremely efficient energy expenditure is still not fully known. In a living brain circuit, background noise is present at all levels of the nervous system, from the microscopic level, such as channel noise in membranes and biochemical noise at synapses, to macroscopic levels, such as small neuronal circuits composed of several to tens of neurons. Noise may degrade the reliability of information transmission and require the involvement of more neurons to perform an information-processing task15. For example, small neurons cost less energy because fewer ion channels are involved, and thus less ion exchange is required through the Na+/K+-ATPase pumps that restore ion gradients after action potentials9. However, the stochastic nature of ion channel gating not only produces variability in the response of a neuron to external stimuli but also causes spontaneous action potentials, damaging the reliability of signal processing18. In this case, trade-offs between information transfer and energy cost may constrain the proper number of ion channels in individual neurons19,20, as well as the proper number of neurons in a network. There should exist a general rule linking energy consumption, neural information transmission and network size.
Earlier theoretical studies21,22 suggested that energy-efficient coding in a neuronal network should be one in which a small fraction of neurons use low firing rates to encode each component of the input signal, leading to the concept of sparse coding. Our theoretical analysis suggests, however, that background noise at different intensities qualifies this sparse coding theory. Noise affects neural population coding in two ways. First, a low level of noise may supply energy that lets neurons within the network fire more electrical signals in response to inputs. Second, increased noise may degrade the reliability of the population response in coding the input signal, requiring more recruited neurons to ensure a reliable response against noise distortion. The trade-off between a reliable neural code and limited available energy results in an optimal number of neurons for maximal information transmission and minimal energy cost at a given noise level. Here, introducing the bistable model makes it feasible to analyze directly the input-output response function to signals in a noisy environment and to provide a general solution for the energy-constrained efficient information transmission process. Because the bistable state describes the action potential initiation process in HH systems, the results presented here may be applied to other types of excitable neuronal models and to real neurons. In addition, although our work focused on the effects of system size on energy efficiency, it could be extended to include the effects of spike correlation and of the stimulus and noise distributions on energy efficiency, based on recent progress on suprathreshold stochastic resonance14,27,41. Our analysis considered in detail the contribution of noise to information transmission regarding sub- and suprathreshold stochastic resonance, which could not have been derived from the purely binary neurons of previous studies22.
The model presented here is not detailed enough to include the ionic channel properties that contribute to an energy-efficient spiking process. Other recent research has focused on this issue and found5,7,8,9,23,25 that there may exist an optimal number of ion channels19, or an appropriate ratio of sodium to potassium channel density, for generating energy-efficient action potentials5,9,23,25. For mature neurons with stabilized ionic channel densities and kinetics, under stable environmental conditions, our model instead considers the more general situation of a network in the presence of noise, and it proves that the coding energy cost per unit of information passes through a global minimum at an optimal neuronal number that depends on the given noise intensity. In addition, the implications of the results obtained with the dynamical model used here may be limited by its assumption of a fixed unit energy per spike. A few experimental and computational reports suggest that the energy cost per spike may vary with stimulus conditions, which could add further capacity for neural coding efficiency42,43,44. This would be worth further examination with a more realistic Hodgkin-Huxley neuronal model with proper ionic channels.
To summarize, in this paper we examined the energy efficiency of an information coding process in a neuronal array network composed of N simple bistable neurons and a CD neuron. We provided an analytical solution that quantifies the relationships among the energy cost per neuron, the mutual information, the noise intensity, the signal intensity and frequency, and the number of neurons required in the circuit for effective information transmission. The novel result obtained here reveals a general rule of the energetics of population coding: there exists an optimal number of neurons in the network for maximal information coding with minimal energy cost, and this optimum depends on the noise intensity and the input pulse strength and frequency. These results reflect general mechanisms of sensory coding, which may give insight into energy-efficient brain communication and neural coding.
Additional Information
How to cite this article: Yu, L. et al. Energy-efficient population coding constrains network size of a neuronal array system. Sci. Rep. 6, 19369; doi: 10.1038/srep19369 (2016).
Acknowledgments
This work was supported by China 863 program (2015AA020508), the National Natural Science Foundation of China (Grant No. 11105062) and the Fundamental Research Funds for the Central Universities (Grant No. lzujbky-2011-57 and No. lzujbky-2015-119). Dr. Yu also offers thanks for the support from the National Natural Science Foundation of China (31271170, 51408429) and the program for the Professor of Special Appointment (Eastern Scholar SHH1140004) at Shanghai Institutions of Higher Learning. This work was partially supported by the open project of State Key Laboratory of Theoretical Physics under Grant No. Y4KF141CJ1.
Footnotes
Author Contributions Y.Y. and L.Y. designed research; L.Y., C.Z., L.L. and Y.Y. performed research; L.Y. and Y.Y. wrote the paper. All authors reviewed the manuscript.
References
- Glazner K. A. et al. Strain specific differences in memory and neuropathology in a mouse model of Alzheimer's disease. Life Sci 86, 942–950 (2010).
- Attwell D. & Laughlin S. B. An energy budget for signaling in the grey matter of the brain. J Cereb Blood Flow Metab 21, 1133–1145 (2001).
- Laughlin S. B., de Ruyter van Steveninck R. R. & Anderson J. C. The metabolic cost of neural information. Nat Neurosci 1, 36–41 (1998).
- Laughlin S. B. & Sejnowski T. J. Communication in neuronal networks. Science 301, 1870–1874 (2003).
- Hasenstaub A., Otte S., Callaway E. & Sejnowski T. J. Metabolic cost as a unifying principle governing neuronal biophysics. Proc Natl Acad Sci USA 107, 12329–12334 (2010).
- Niven J. E. & Laughlin S. B. Energy limitation as a selective pressure on the evolution of sensory systems. J Exp Biol 211, 1792–1804 (2008).
- Sengupta B., Stemmler M., Laughlin S. B. & Niven J. E. Action potential energy efficiency varies among neuron types in vertebrates and invertebrates. PLoS Comput Biol 6, e1000840 (2010).
- Yu Y., Hill A. P. & McCormick D. A. Warm body temperature facilitates energy efficient cortical action potentials. PLoS Comput Biol 8, e1002456 (2012).
- Sengupta B., Faisal A. A., Laughlin S. B. & Niven J. E. The effect of cell size and channel density on neuronal information encoding and energy efficiency. J Cereb Blood Flow Metab 33, 1465–1473 (2013).
- Sengupta B., Laughlin S. B. & Niven J. E. Balanced excitatory and inhibitory synaptic currents promote efficient coding and metabolic efficiency. PLoS Comput Biol 9, e1003263 (2013).
- Dayan P. & Abbott L. F. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. (MIT Press, 2005).
- Georgopoulos A. P., Schwartz A. B. & Kettner R. E. Neuronal population coding of movement direction. Science 233, 1416–1419 (1986).
- Yu Y., Liu F., Wang W. & Lee T. S. Optimal synchrony state for maximal information transmission. Neuroreport 15, 1605–1610 (2004).
- Chen Y., Yu L. & Qin S. M. Detection of subthreshold pulses in neurons with channel noise. Phys Rev E 78, 051909 (2008).
- Tabareau N., Slotine J. J. & Pham Q. C. How synchronization protects from noise. PLoS Comput Biol 6, e1000637 (2010).
- Diesmann M., Gewaltig M. O. & Aertsen A. Stable propagation of synchronous spiking in cortical neural networks. Nature 402, 529–533 (1999).
- Wang S., Wang W. & Liu F. Propagation of firing rate in a feed-forward neuronal network. Phys Rev Lett 96, 018103 (2006).
- Schneidman E., Freedman B. & Segev I. Ion channel stochasticity may be critical in determining the reliability and precision of spike timing. Neural Comput 10, 1679–1703 (1998).
- Schreiber S., Machens C. K., Herz A. V. & Laughlin S. B. Energy-efficient coding with discrete stochastic events. Neural Comput 14, 1323–1346 (2002).
- Yu L. & Liu L. Optimal size of stochastic Hodgkin-Huxley neuronal systems for maximal energy efficiency in coding pulse signals. Phys Rev E 89, 032725 (2014).
- Barlow H. Possible principles underlying the transformation of sensory messages. Sensory Communication, 217–234 (1961).
- Levy W. B. & Baxter R. A. Energy efficient neural codes. Neural Comput 8, 531–543 (1996).
- Alle H., Roth A. & Geiger J. R. Energy-efficient action potentials in hippocampal mossy fibers. Science 325, 1405–1408 (2009).
- Carter B. C. & Bean B. P. Sodium entry during action potentials of mammalian neurons: incomplete inactivation and reduced metabolic efficiency in fast-spiking neurons. Neuron 64, 898–909 (2009).
- Lewis J. E., Gilmour K. M., Moorhead M. J., Perry S. F. & Markham M. R. Action potential energetics at the organismal level reveal a trade-off in efficiency at high firing rates. J Neurosci 34, 197–201 (2014).
- McDonnell M. D., Stocks N. G. & Abbott D. Optimal stimulus and noise distributions for information transmission via suprathreshold stochastic resonance. Phys Rev E 75, 061105 (2007).
- Durrant S., Kang Y., Stocks N. & Feng J. Suprathreshold stochastic resonance in neural processing tuned by correlation. Phys Rev E 84, 011923 (2011).
- Gammaitoni L., Marchesoni F., Menichella-Saetta E. & Santucci S. Stochastic resonance in bistable systems. Phys Rev Lett 62, 349–352 (1989).
- König P., Engel A. K. & Singer W. Integrator or coincidence detector? The role of the cortical neuron revisited. Trends Neurosci 19, 130–137 (1996).
- Abeles M. Role of the cortical neuron: integrator or coincidence detector? Isr J Med Sci 18, 83–92 (1982).
- Abbott L. F. & Nelson S. B. Synaptic plasticity: taming the beast. Nat Neurosci 3 Suppl, 1178–1183 (2000).
- Averbeck B. B., Latham P. E. & Pouget A. Neural correlations, population coding and computation. Nat Rev Neurosci 7, 358–366 (2006).
- Ratte S., Hong S., De Schutter E. & Prescott S. A. Impact of neuronal properties on network coding: roles of spike initiation dynamics and robust synchrony transfer. Neuron 78, 758–772 (2013).
- Lecar H. & Nossal R. Theory of threshold fluctuations in nerves. II. Analysis of various sources of membrane noise. Biophys J 11, 1068–1084 (1971).
- Gardiner C. Stochastic Methods: A Handbook for the Natural and Social Sciences. (Springer, 2009).
- Adair R. K. Noise and stochastic resonance in voltage-gated ion channels. Proc Natl Acad Sci USA 100, 12099–12104 (2003).
- Zeng S. & Jung P. Mechanism for neuronal spike generation by small and large ion channel clusters. Phys Rev E 70, 011903 (2004).
- Shannon C. E. The mathematical theory of communication. 1963. MD Comput 14, 306–317 (1997).
- Stocks N. G. Suprathreshold stochastic resonance in multilevel threshold systems. Phys Rev Lett 84, 2310–2313 (2000).
- Kandel E. R., Schwartz J. H. & Jessell T. M. Principles of Neural Science. (McGraw-Hill, 2000).
- Sasaki H. et al. Suprathreshold stochastic resonance in visual signal detection. Behav Brain Res 193, 152–155 (2008).
- Shu Y. S., Hasenstaub A., Duque A., Yu Y. G. & McCormick D. A. Modulation of intracortical synaptic potentials by presynaptic somatic membrane potential. Nature 441, 761–765 (2006).
- Yi G. S., Wang J., Tsang K. M., Wei X. L. & Deng B. Input-output relation and energy efficiency in the neuron with different spike threshold dynamics. Front Comput Neurosci 9, 62 (2015).
- de Polavieja G. G., Harsch A., Kleppe I., Robinson H. P. & Juusola M. Stimulus history reliably shapes action potential waveforms of cortical neurons. J Neurosci 25, 5657–5665 (2005).