Author manuscript; available in PMC: 2019 Dec 1.
Published in final edited form as: Neuroimage. 2018 Aug 21;183:425–437. doi: 10.1016/j.neuroimage.2018.08.022

Table A.5:

Notation: Throughout this paper, we denote matrices by bold capital letters (e.g., A), vectors by bold lowercase letters (e.g., a), and scalars or functions by non-bold letters. a_{ji} is the scalar in row i and column j of A, while a_i is the ith row and a_j the jth column of A.

Notation  Description
N  Number of training samples
d  Dimensionality of the feature vectors
d′  Dimensionality of the selected feature set
X_{d×N}  Feature matrix of all samples
y_{1×N}  The class labels for each of the samples
X′_{d′×N}  The new, reduced feature matrix after feature selection
k(x, xₙ)  Subkernel function between the two samples x and xₙ
α  Weight vector learned to aggregate subkernels into a kernel
k(x, xₙ, α)  Aggregate kernel of the two samples x and xₙ, using weights α
‖a‖₁  The ℓ₁ norm of vector a (i.e., ‖a‖₁ = Σᵢ |aᵢ|)
‖a‖₂  The ℓ₂ norm of vector a (i.e., ‖a‖₂ = (Σᵢ aᵢ²)^(1/2))
‖A‖₂,₁  The ℓ₂,₁ norm of matrix A (i.e., ‖A‖₂,₁ = Σⱼ (Σᵢ a_{ji}²)^(1/2))
ℝ≥0  The set of non-negative real numbers
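As a concrete sketch of the norm definitions above (and of the aggregate kernel, assuming the common multiple-kernel-learning form k(x, xₙ, α) = Σₘ αₘ kₘ(x, xₙ) — that specific form is an assumption, not stated in this table), the following NumPy snippet is illustrative only; the function names are hypothetical:

```python
import numpy as np

def l1_norm(a):
    # l1 norm of a vector: sum of absolute values of its entries
    return np.sum(np.abs(a))

def l2_norm(a):
    # l2 (Euclidean) norm of a vector
    return np.sqrt(np.sum(a ** 2))

def l21_norm(A):
    # l2,1 norm, following the paper's convention that a_{ji} is the entry
    # in row i, column j: the inner sum over i is the l2 norm of column j,
    # and the outer sum runs over the columns.
    return np.sum(np.sqrt(np.sum(A ** 2, axis=0)))

def aggregate_kernel(x, xn, alpha, subkernels):
    # Assumed weighted-sum aggregation of subkernels (standard in
    # multiple-kernel learning); each k_m is a callable subkernel.
    return sum(a_m * k_m(x, xn) for a_m, k_m in zip(alpha, subkernels))

a = np.array([3.0, -4.0])
print(l1_norm(a))   # 7.0
print(l2_norm(a))   # 5.0

A = np.array([[3.0, 0.0],
              [4.0, 0.0]])
print(l21_norm(A))  # 5.0 (column norms 5 and 0)
```

Note that the ℓ₂,₁ norm treats each column as a unit: it is small only when entire columns are driven to zero, which is why it is commonly used to induce structured (group) sparsity in feature-selection objectives.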