Frontiers in Synaptic Neuroscience. 2011 Sep 6;3:5. doi: 10.3389/fnsyn.2011.00005

One Cell to Rule Them All, and in the Dendrites Bind Them

Rui P Costa 1,*, P Jesper Sjöström 2
PMCID: PMC3167096  PMID: 21922007

More than 60 years ago, the McGill University professor Donald Hebb published his famous postulate stating that, to store a memory trace, the connection from a neuron that persistently helps activate another one should be strengthened (Hebb, 1949). Inspired by Hebb's postulate, Rosenblatt (1958) introduced the perceptron learning machine a decade later as a simplified model of information storage and retrieval in the brain. This model could perform binary classification through a learning rule that altered synaptic weights, and it raised high expectations in the field of artificial neural networks: here was a simple neural-network-like machine that could learn to recognize patterns and to tell them apart.
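
In its simplest form, Rosenblatt's rule nudges the weights toward every misclassified example until a separating boundary is found. The minimal Python sketch below illustrates the idea; the learning rate, epoch count, and the AND-gate toy data are our own illustrative choices, not taken from the original paper.

```python
# A minimal sketch of Rosenblatt-style perceptron learning (illustrative only;
# learning rate, epochs, and data are arbitrary assumptions).
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Learn weights w and bias b so that (X @ w + b > 0) matches labels y in {0, 1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            # Error-driven update: adjust weights only on misclassified examples.
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    return w, b

# A linearly separable pattern (logical AND) is learned correctly.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y_and)
print([1 if xi @ w + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]
```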

The excitement was not long-lived, however. Minsky and Papert (1969) proved that a single-layer perceptron is only capable of learning linearly separable patterns, which means it cannot learn an XOR function, as this would require it to respond when one or the other input is active, but not when both are. Individual perceptrons were thus inherently flawed, it seemed. Minsky and Papert's findings were widely but erroneously interpreted to mean that all perceptrons suffered from the same problem, even though they had in actuality shown that multi-layer perceptrons have the capacity for non-linear computations. Nevertheless, the winter of connectionism research had arrived, and it took around a decade for interest in the field to be revived, through developments by pioneers such as Stephen Grossberg, John Hopfield, and David Rumelhart (Abbott, 2008). Yet even after this revival, it has remained unclear what types of non-linear computations individual neurons of the actual brain can execute.
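
A short worked argument makes this limitation concrete (the notation here is ours, chosen for illustration): a single-layer perceptron with inputs x1 and x2 outputs 1 whenever w1·x1 + w2·x2 > θ. Implementing XOR would require w1 > θ (input 1,0 must elicit a response), w2 > θ (input 0,1 must elicit a response), and θ ≥ 0 (input 0,0 must not), yet also w1 + w2 ≤ θ (input 1,1 must not). The first three conditions force w1 + w2 > 2θ ≥ θ, contradicting the fourth, so no choice of weights and threshold will do; adding a hidden layer removes the contradiction, because the output unit can then combine intermediate non-linear responses.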

In more recent years, synaptic plasticity theory has been extended to include the precise timing of spikes in pre- and postsynaptic neurons, based on theoretical as well as experimental studies (Gerstner et al., 1996; Markram et al., 1997). This has led to the development of the spike-timing-dependent plasticity (STDP) paradigm, which has attracted great interest as a biologically plausible neuronal basis for information storage in the brain, in particular for the learning of causal relationships, as it is temporally sensitive (Markram et al., 2011).
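
For readers unfamiliar with the formalism, the commonly used pair-based form of STDP changes a synaptic weight as a function of the delay between pre- and postsynaptic spikes, with exponentially decaying potentiation and depression windows. The sketch below is a generic textbook version of this rule, not the specific model of any study cited here, and the amplitudes and time constants are arbitrary assumptions.

```python
import numpy as np

# Pair-based STDP window: potentiation when the presynaptic spike precedes the
# postsynaptic spike (dt > 0), depression otherwise. Parameter values are
# illustrative assumptions only.
A_PLUS, A_MINUS = 0.01, 0.012      # amplitudes of potentiation / depression
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants (ms)

def stdp_dw(dt_ms):
    """Weight change for one pre/post spike pair separated by dt = t_post - t_pre."""
    if dt_ms > 0:
        return A_PLUS * np.exp(-dt_ms / TAU_PLUS)    # pre before post: LTP
    else:
        return -A_MINUS * np.exp(dt_ms / TAU_MINUS)  # post before pre: LTD

print(stdp_dw(+10.0), stdp_dw(-10.0))  # small potentiation, small depression
```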

As was the case in Rosenblatt's (1958) perceptron paper, the vast majority of theoretical synaptic plasticity studies treat neurons as points in space, entirely devoid of dendritic arborizations. There has been an ongoing debate in the field regarding the extent to which dendrites are important for computations in the brain; perhaps they are merely an epiphenomenal bug rather than a feature (Häusser and Mel, 2003)? Hebb (1972) took an interestingly extreme view and surmised that dendrites are merely there to connect, and therefore serve no purpose in plasticity. But dendrites are key to distinguishing neuronal types – the fan-shaped dendritic tree typifies the Purkinje cell, while the ascending thick-tufted dendritic arbor defines the neocortical layer-5 pyramidal cell – so it would seem strange if dendrites did nothing more than hook cells up to each other (Sjöström et al., 2008). Indeed, recent studies have shown that synaptic plasticity depends on the location of a synapse in the dendritic tree (Sjöström and Häusser, 2006) and that dendritic branches themselves are plastic (Losonczy et al., 2008). By measuring the coupling between local dendritic spikes and the soma before and after a synaptic plasticity induction protocol in the hippocampus, Losonczy et al. (2008) discovered that dendrites, too, are plastic. Based on their findings, they proposed the existence of a branch-strength potentiation (BSP) cellular learning rule, which is input-specific to a degree, suggesting that individual dendritic compartments could be involved in storing spatio-temporal features. But why is BSP needed? After all, it would seem that Hebbian learning in general and STDP in particular provide sufficient means for information storage in the brain.

In a recently published study, Legenstein and Maass (2011) attacked this key issue using an entirely theoretical approach. They introduced a new experimentally based phenomenological model that brought together the STDP and BSP learning rules. They applied their model to a simple feature-binding problem, in which cell assemblies coding for different features (e.g., yellow, star, black, and disk) were randomly connected to the branches of the postsynaptic cell. The neuron was then trained on pairs of features, such as yellow + star and black + disk, after which it responded correctly to trained pairs, but not to other combinations such as yellow + disk. This selectivity was due to the emergence of synaptic clustering and competition between dendritic branches that resulted from the interplay between STDP and BSP, allowing a single neuron to bind input features in a self-organized manner.
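
To make the interplay of the two rules more tangible, here is a heavily simplified toy sketch in Python. It is emphatically not the Legenstein–Maass model: the rate-like "trial" dynamics, thresholds, learning rates, and network sizes are all our own assumptions, and the sketch is only meant to show how a synaptic (STDP-like) update and a branch-strength (BSP-like) update can be coupled on the same neuron.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: one neuron with several dendritic branches, receiving synapses from
# four feature assemblies. All sizes, thresholds, and learning rates below are
# arbitrary assumptions, not parameters from Legenstein and Maass (2011).
N_BRANCHES, N_FEATURES, SYN_PER_FEATURE = 8, 4, 10
syn_feature = np.repeat(np.arange(N_FEATURES), SYN_PER_FEATURE)  # feature of each synapse
syn_branch = rng.integers(0, N_BRANCHES, size=syn_feature.size)  # random branch placement
w = np.full(syn_feature.size, 0.5)   # synaptic weights (STDP-like variable)
g = np.full(N_BRANCHES, 0.3)         # branch strengths (BSP-like variable)

BRANCH_THRESH, SOMA_THRESH = 1.0, 1.0
ETA_W, ETA_G, W_MAX, G_MAX = 0.05, 0.05, 1.0, 1.0

def trial(active_features, learn=True):
    """One stimulus presentation; returns True if the soma 'spikes'."""
    active = np.isin(syn_feature, active_features)
    # Local depolarization of each branch from its currently active synapses.
    local = np.bincount(syn_branch, weights=w * active, minlength=N_BRANCHES)
    branch_spike = local > BRANCH_THRESH
    soma_spike = np.sum(g * local * branch_spike) > SOMA_THRESH
    if learn and soma_spike:
        on_spiking_branch = branch_spike[syn_branch]
        # STDP-like step: potentiate active synapses on branches that fired with
        # the soma, and depress the inactive ones there (synaptic competition).
        w[active & on_spiking_branch] = np.minimum(
            w[active & on_spiking_branch] + ETA_W, W_MAX)
        w[~active & on_spiking_branch] *= 0.95
        # BSP-like step: strengthen the somatic coupling of branches that fired.
        g[branch_spike] = np.minimum(g[branch_spike] + ETA_G, G_MAX)
    return bool(soma_spike)

# Train on two feature pairs (say 0+1 = "yellow+star", 2+3 = "black+disk") ...
pairs = [[0, 1], [2, 3]]
for _ in range(200):
    trial(pairs[rng.integers(2)])

# ... then probe trained pairs versus an untrained combination. With these toy
# parameters the trained pairs tend to recruit strongly coupled branches, but
# the sketch only demonstrates how the two rules can be coupled; it does not
# reproduce the selectivity reported in the actual study.
print(trial([0, 1], learn=False), trial([2, 3], learn=False), trial([0, 3], learn=False))
```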

Despite these interesting emergent properties, and as was the case for the perceptron, the Legenstein–Maass model was not able to solve the XOR problem (i.e., responding to either pair of features, but not to both pairs together). Indeed, the XOR problem might only be solvable at the network level, requiring inhibitory interneurons to do so. Nevertheless, whether a single neuron of the brain can or cannot perform non-linear pattern separation remains an open question (Sjöström et al., 2008). It would also be interesting to know the information storage capacity of such Legenstein–Maass neurons. Finally, although STDP is necessary in their model, Hebbian learning together with synaptic scaling (Turrigiano et al., 1998) is likely to yield similar results.

The take-home message of the Legenstein and Maass (2011) study is that individual neurons can potentially operate as small networks in their own right, binding features at the single-cell level. This suggests a form of dendritic homunculus that binds specific feature combinations through the interplay of STDP and BSP, thus acting as a substrate for the correlation theory of brain function (von der Malsburg, 1981) as well as for the binding problem (Treisman, 1996). The Legenstein–Maass study is therefore relevant to several disciplines, including experimental and theoretical neuroscience as well as psychology.

References

  1. Abbott L. F. (2008). Theoretical neuroscience rising. Neuron 60, 489–495. doi: 10.1016/j.neuron.2008.10.019
  2. Gerstner W., Kempter R., van Hemmen J. L., Wagner H. (1996). A neuronal learning rule for sub-millisecond temporal coding. Nature 383, 76–78. doi: 10.1038/383076a0
  3. Häusser M., Mel B. (2003). Dendrites: bug or feature? Curr. Opin. Neurobiol. 13, 372–383. doi: 10.1016/S0959-4388(03)00075-8
  4. Hebb D. O. (1949). The Organization of Behavior: A Neuropsychological Theory. New York: Wiley.
  5. Hebb D. O. (1972). A Textbook of Psychology. Philadelphia, PA: W. B. Saunders Company.
  6. Legenstein R., Maass W. (2011). Branch-specific plasticity enables self-organization of nonlinear computation in single neurons. J. Neurosci. 31, 10787–10802. doi: 10.1523/JNEUROSCI.5684-10.2011
  7. Losonczy A., Makara J. K., Magee J. C. (2008). Compartmentalized dendritic plasticity and input feature storage in neurons. Nature 452, 436–441. doi: 10.1038/nature06725
  8. Markram H., Gerstner W., Sjöström P. J. (2011). A history of spike-timing-dependent plasticity. Front. Syn. Neurosci. 3:4. doi: 10.3389/fnsyn.2011.00004
  9. Markram H., Lübke J., Frotscher M., Sakmann B. (1997). Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science 275, 213–215. doi: 10.1126/science.275.5297.213
  10. Minsky M., Papert S. (1969). Perceptrons. Cambridge, MA: MIT Press.
  11. Rosenblatt F. (1958). The perceptron: a probabilistic model for information storage and organization in the brain. Psychol. Rev. 65, 386–408. doi: 10.1037/h0042519
  12. Sjöström P. J., Häusser M. (2006). A cooperative switch determines the sign of synaptic plasticity in distal dendrites of neocortical pyramidal neurons. Neuron 51, 227–238. doi: 10.1016/j.neuron.2006.06.017
  13. Sjöström P. J., Rancz E. A., Roth A., Häusser M. (2008). Dendritic excitability and synaptic plasticity. Physiol. Rev. 88, 769–840. doi: 10.1152/physrev.00016.2007
  14. Treisman A. (1996). The binding problem. Curr. Opin. Neurobiol. 6, 171–178. doi: 10.1016/S0959-4388(96)80070-5
  15. Turrigiano G. G., Leslie K. R., Desai N. S., Rutherford L. C., Nelson S. B. (1998). Activity-dependent scaling of quantal amplitude in neocortical neurons. Nature 391, 892–896. doi: 10.1038/36103
  16. von der Malsburg C. (1981). The Correlation Theory of Brain Function. Internal Report 81-2. Göttingen: Max-Planck-Institute for Biophysical Chemistry.
