Frontiers in Computational Neuroscience. 2023 Nov 17;17:1286681. doi: 10.3389/fncom.2023.1286681

Corrigendum: Riemannian geometry-based metrics to measure and reinforce user performance changes during brain-computer interface user training

Nicolas Ivanov 1,2, Tom Chau 1,2,*
PMCID: PMC10691671  PMID: 38045092

In the published article, there was an error: a detail regarding the computation of the weighted average classDistinct and classStability metrics was omitted.

A correction has been made to Section 2. Materials and methods, subsection “2.1 Performance metric design”, paragraph 7. This sentence previously stated:

“The intra-class dispersion is computed using:

$(1 - \alpha_2)\,\Phi_{k-1,c} + \alpha_2\,\phi_{k,c}$  (6)

where $\alpha_2 \in [0, 1]$ is a constant, $\Phi_{k-1,c}$ is the intra-class dispersion for the class $c$ trials of the $(k-1)$th block, and $\phi_{k,c}$ is the intra-class dispersion of class $c$ trials computed only during the current ($k$th) block.”

The corrected sentences appear below:

“For the weighted average classDistinct and classStability metrics, we made the following modification to the calculation of the intra-class dispersion. We split the set of trials, $T$, into $N_s$ subsets of $N_t$ trials, $T_j$, such that

$T_1 \cup T_2 \cup \dots \cup T_{N_s} = T.$

Subsets were formed by splitting trials according to the chronological order in which they were performed; for example, the first $N_t$ trials performed during a block would be grouped into subset $T_1$. Using these subsets, we computed a modified intra-class dispersion as:

$\Phi^* = \frac{1}{N_s}\,\frac{1}{N_t}\sum_{j=1}^{N_s}\sum_{i=1}^{N_t}\delta_R\!\left(\bar{\Gamma}_{T_j},\, \Gamma_{T_j,i}\right)$

where $N_s$ is the number of trial subsets, $N_t$ is the number of trials in each subset, $\bar{\Gamma}_{T_j}$ is the mean covariance matrix of trials within the $j$th subset of trials, $\Gamma_{T_j,i}$ is the covariance matrix of the $i$th trial within subset $T_j$, and $\delta_R$ denotes the Riemannian distance. The motivation behind this modification was to reduce the impact of signal non-stationarities that may artificially increase the intra-class dispersion when considering a large number of trials. For our analysis, we set $N_t = 5$. Trial subsets were disjoint save for when computing within-block post-trial intra-class dispersion values. If the number of trials completed within the block was not divisible by $N_t$, subset $T_{N_s}$ was formed using the most recently completed $N_t$ trials; consequently, this subset could share up to $N_t - 1$ trials with subset $T_{N_s-1}$.

The post-trial intra-class dispersion was computed using this modified intra-class dispersion:

$(1 - \alpha_2)\,\Phi^*_{k-1,c} + \alpha_2\,\phi^*_{k,c}$  (6)

where $\alpha_2 \in [0, 1]$ is a constant, $\Phi^*_{k-1,c}$ is the modified intra-class dispersion for the class $c$ trials of the $(k-1)$th block, and $\phi^*_{k,c}$ is the modified intra-class dispersion of class $c$ trials completed only during the current ($k$th) block.”
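
To make the corrected procedure concrete, the following is a minimal NumPy sketch of the modified intra-class dispersion $\Phi^*$ and the post-trial update in Equation (6). It is not the authors' implementation: the function names (riemann_distance, riemann_mean, modified_dispersion, post_trial_dispersion), the eigendecomposition-based Riemannian distance and Karcher mean, and the example value of $\alpha_2$ are illustrative assumptions.

```python
# Sketch of the modified intra-class dispersion (Phi*) and the post-trial
# update in Equation (6). All names and implementation details here are
# illustrative assumptions, not the authors' code.
import numpy as np


def spd_power(M, p):
    """M**p for a symmetric positive-definite (SPD) matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * (w ** p)) @ V.T


def spd_log(M):
    """Matrix logarithm of an SPD matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T


def sym_exp(S):
    """Matrix exponential of a symmetric matrix."""
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T


def riemann_distance(A, B):
    """Affine-invariant Riemannian distance delta_R between SPD matrices A and B."""
    A_isqrt = spd_power(A, -0.5)
    w = np.linalg.eigvalsh(A_isqrt @ B @ A_isqrt)
    return np.sqrt(np.sum(np.log(w) ** 2))


def riemann_mean(covs, tol=1e-8, max_iter=50):
    """Karcher (geometric) mean of a set of SPD matrices via fixed-point iteration."""
    M = np.mean(covs, axis=0)  # initialize at the arithmetic mean
    for _ in range(max_iter):
        M_sqrt, M_isqrt = spd_power(M, 0.5), spd_power(M, -0.5)
        # Average the log-maps of all matrices at the current estimate M.
        T = np.mean([spd_log(M_isqrt @ C @ M_isqrt) for C in covs], axis=0)
        M = M_sqrt @ sym_exp(T) @ M_sqrt
        if np.linalg.norm(T) < tol:
            break
    return M


def modified_dispersion(trial_covs, n_t=5):
    """Phi*: mean Riemannian distance to the subset mean over chronological
    subsets of n_t trial covariance matrices (the corrigendum uses N_t = 5)."""
    n_trials = len(trial_covs)
    # Assumption: at least one full subset of n_t trials is available.
    assert n_trials >= n_t, "sketch assumes at least n_t trials"
    # Disjoint chronological subsets of n_t trials ...
    starts = list(range(0, n_trials - n_t + 1, n_t))
    # ... plus, if trials are left over, a final subset made of the most
    # recently completed n_t trials (it may overlap the previous subset).
    if n_trials % n_t != 0:
        starts.append(n_trials - n_t)
    total = 0.0
    for s in starts:
        subset = trial_covs[s:s + n_t]
        mean_cov = riemann_mean(subset)
        total += sum(riemann_distance(mean_cov, C) for C in subset)
    return total / (len(starts) * n_t)  # (1/N_s)(1/N_t) * double sum


def post_trial_dispersion(phi_prev_block, phi_current_block, alpha2=0.5):
    """Equation (6): weighted combination of the previous block's modified
    dispersion and that of the current block. alpha2 = 0.5 is a placeholder."""
    return (1.0 - alpha2) * phi_prev_block + alpha2 * phi_current_block
```

Under these assumptions, calling post_trial_dispersion(modified_dispersion(prev_block_covs), modified_dispersion(current_block_covs)) mirrors the update in Equation (6), with an unfinished block's final subset reusing the most recent $N_t$ trials as described above.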

The authors apologize for this error and state that this does not change the scientific conclusions of the article in any way. The original article has been updated.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

