eLife. 2023 Mar 13;12:e87507. doi: 10.7554/eLife.87507

Correction: Learning precise spatiotemporal sequences via biophysically realistic learning rules in a modular, spiking network

Ian Cone, Harel Z Shouval
PMCID: PMC10010686  PMID: 36912878

Cone I, Shouval HZ. 2021. Learning precise spatiotemporal sequences via biophysically realistic learning rules in a modular, spiking network. eLife 10:e63751. doi: 10.7554/eLife.63751.

Published 18 March 2021

A recent publication (Zajzon et al., 2023), which reproduced this paper's Markovian sequence model in NEST (Zajzon et al., 2022), has made us aware of an error in the original code included in the data availability statement. Specifically, the original Markovian code (which we hereafter refer to as Markovian FF) improperly restricted Messenger-to-Timer intercolumnar connections such that they were limited to be feed-forward (i.e. ordinal to the presented sequence). We thank Zajzon et al., 2023 for also proposing a fix (via raising the feed-forward threshold, as described below). An updated version of the Markovian code (which we hereafter refer to as Markovian A2A), which correctly allows for any (all-to-all, or A2A) Messenger-to-Timer intercolumnar connections, is now included (along with the previous code, Markovian FF) in the ModelDB upload. Supplementary Files 1 and 2 have been updated to include accurate and correctly scaled values for both Markovian FF and Markovian A2A. Additionally, an inconsistency in Equations 8 and 9 has been corrected, and text has been added to the Materials and methods to highlight the importance of the learning thresholds r_th and r_th^FF. We thank the authors of Zajzon et al., 2023 for their contributions to recognizing and rectifying these errors.

The corrected parameters in Markovian A2A are as follows:

· Feed-forward learning threshold r_th^FF = 30 Hz

· Saturation level for feed-forward LTD eligibility trace T_max,FF^d = 0.0045

· Activation rate for feed-forward LTP eligibility trace η_p^FF = 8.8 × 3500 ms⁻¹

· Activation rate for feed-forward LTD eligibility trace η_d^FF = 10 × 3500 ms⁻¹

· Feed-forward learning rate η^FF = 0.4

The original parameters in Markovian FF were set as:

· Feed-forward learning threshold r_th^FF = 20 Hz

· Saturation level for feed-forward LTD eligibility trace T_max,FF^d = 0.00345

· Activation rate for feed-forward LTP eligibility trace η_p^FF = 20 × 3500 ms⁻¹

· Activation rate for feed-forward LTD eligibility trace η_d^FF = 15 × 3500 ms⁻¹

· Feed-forward learning rate η^FF = 0.25

The corrected Equations 8 and 9 read as follows:

\tau_p \frac{dT_{ij}^p}{dt} = -T_{ij}^p + \eta_p H_{ij} \left( T_{max}^p - T_{ij}^p \right) \quad (8)

\tau_d \frac{dT_{ij}^d}{dt} = -T_{ij}^d + \eta_d H_{ij} \left( T_{max}^d - T_{ij}^d \right) \quad (9)

The original Equations 8 and 9 were published as:

\tau_p \frac{dT_{ij}^p}{dt} = -T_{ij}^p + \eta_p H_{ij} \frac{T_{max}^p - T_{ij}^p}{T_{max}^p} \quad (8)

\tau_d \frac{dT_{ij}^d}{dt} = -T_{ij}^d + \eta_d H_{ij} \frac{T_{max}^d - T_{ij}^d}{T_{max}^d} \quad (9)
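The difference between the two forms of the trace dynamics can be checked numerically; below is a minimal Euler-integration sketch of the corrected Equation 8. All parameter values (tau_p, eta_p, T_max_p, the drive H) are illustrative placeholders, not values from the paper.

```python
# Sketch of the corrected eligibility-trace dynamics (Eq. 8):
#   tau_p * dT/dt = -T + eta_p * H * (T_max - T)
# Parameters below are illustrative, not taken from the paper.
tau_p = 2000.0   # trace time constant (ms), illustrative
eta_p = 0.01     # trace activation rate, illustrative
T_max_p = 1.0    # saturation level of the trace
dt = 1.0         # Euler step (ms)

def step_trace(T, H, tau, eta, T_max, dt):
    """One Euler step of tau * dT/dt = -T + eta * H * (T_max - T)."""
    dT = (-T + eta * H * (T_max - T)) / tau
    return T + dt * dT

# Drive the trace with a constant Hebbian overlap H for 1 s,
# then let it decay with H = 0.
T = 0.0
for _ in range(1000):
    T = step_trace(T, H=50.0, tau=tau_p, eta=eta_p, T_max=T_max_p, dt=dt)
peak = T
for _ in range(1000):
    T = step_trace(T, H=0.0, tau=tau_p, eta=eta_p, T_max=T_max_p, dt=dt)
```

Note that the corrected form saturates at the fixed point eta*H*T_max / (1 + eta*H), which never exceeds T_max, whereas the original published form rescaled the saturation term by 1/T_max.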

The following text has been appended to the Materials and methods:

“Our Hebbian term is also subject to rate thresholds r_th and r_th^FF in the recurrent and feed-forward cases, which we further discuss at the end of this section.”

“However, in the presence of noise, modules can have spurious activity overlaps which cause non-zero H_ij and therefore potentiation of weights W_ij which are non-sequential. This can lead to network instability and a failure to encode the presented sequence. To account for this, rate thresholds r_th and r_th^FF are included in the Hebbian term H_ij. By setting these thresholds above the effective noise level (see Supplementary File 1), the Hebbian overlap H_ij used to activate traces ignores the random, noise-driven overlaps. Crucially, as r_th^FF dictates the sensitivity to inter-columnar overlaps, it is a critical and necessary parameter to enable the all-to-all connectivity of the model.”
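The role of the rate threshold can be illustrated with a small sketch. The rectified-product form of the overlap used here is an assumption for illustration only; the paper defines H_ij from the model's pre- and postsynaptic firing rates.

```python
import numpy as np

def hebbian_overlap(r_pre, r_post, r_th):
    """Thresholded Hebbian overlap (illustrative rectified-product form).

    Rates at or below the threshold r_th contribute nothing, so
    noise-level coactivity cannot activate the eligibility traces.
    """
    pre = np.maximum(r_pre - r_th, 0.0)    # rectify presynaptic rates
    post = np.maximum(r_post - r_th, 0.0)  # rectify postsynaptic rates
    return np.outer(post, pre)             # H[i, j]: post unit i, pre unit j

rates_pre = np.array([5.0, 40.0])   # Hz; first unit sits at noise level
rates_post = np.array([50.0, 8.0])  # Hz; second unit sits at noise level
H = hebbian_overlap(rates_pre, rates_post, r_th=30.0)
# Only the (post = 50 Hz, pre = 40 Hz) pair survives the 30 Hz threshold.
```

Raising r_th^FF above the effective noise level, as in the Markovian A2A fix, is what lets all-to-all Messenger-to-Timer connections coexist with noisy activity.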

The article has been corrected accordingly.

Additional files

Supplementary file 1. Table of main model parameters.
elife-87507-supp1.docx (36.3KB, docx)
Supplementary file 2. Table of reservoir, sparse net, and rate-based model parameters.
elife-87507-supp2.docx (32.9KB, docx)

References

  1. Zajzon B, Duarte R, Morrison A. 2022. Towards reproducible models of sequence learning: replication and analysis of a modular spiking network with reward-based learning. GitHub. https://github.com/zbarni/re_modular_seqlearn
  2. Zajzon B, Duarte R, Morrison A. 2023. Towards reproducible models of sequence learning: replication and analysis of a modular spiking network with reward-based learning. bioRxiv. doi: 10.1101/2023.01.18.524604

