J Neurosci. 2023 Jun 14;43(24):4498–4512. doi: 10.1523/JNEUROSCI.2228-22.2023

Figure 7.

Schematics of eigen decomposition of information. A, The information contained in trial-by-trial high-dimensional population responses can be calculated in the eigenspace (obtained by principal component analysis) instead of the original voxel space (Eq. 13). Thanks to the linear independence of the eigendimensions, the information of the whole population can be reformulated as the sum of the information along each eigendimension, $\sum_{i}^{N} \frac{(df \cdot v_i)^2}{\lambda_i}$ (Eq. 13). B, Two-dimensional response distributions of two voxels to two stimuli. The yellow and blue ellipses also indicate the directions of the covariances ($Q_1$ and $Q_2$). The red vector ($df$) can be viewed as the Euclidean distance between the two distributions (Eq. 2 with the assumption $ds = 1$). C, The gray ellipse depicts the averaged covariance $Q$ (the average of $Q_1$ and $Q_2$; Eq. 3). D, The averaged covariance can be decomposed into two principal components (PC1 and PC2). E, The variance $\lambda_i$ along each PC is illustrated. Intervoxel RCs result in a larger variance along PC1 ($\lambda_1$) than along PC2 ($\lambda_2$). F, The squared projected signals $(df \cdot v_i)^2$ on each PC are illustrated. Note that the sum of the squared projected signals [i.e., the sum of the bars in F, $\sum_{i}^{N} (df \cdot v_i)^2$] is a constant, which equals the squared norm of the signal vector $df$.
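
The identity sketched in A can be checked numerically. Below is a minimal Python/NumPy sketch on simulated toy data (the number of voxels, the correlation structure, and all variable names are illustrative assumptions, not taken from the paper): it verifies that the information computed in the original voxel space, $df^{\top} Q^{-1} df$, equals the sum of $(df \cdot v_i)^2 / \lambda_i$ across eigendimensions, and that the sum of squared projections equals the squared norm of $df$ (the constant noted for panel F).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: N voxels, two stimuli, correlated trial-by-trial noise.
N, trials = 50, 200
shared = rng.standard_normal((trials, 1))                        # shared fluctuation -> intervoxel correlations
resp1 = shared + 0.5 * rng.standard_normal((trials, N))          # responses to stimulus 1
resp2 = 0.3 + shared + 0.5 * rng.standard_normal((trials, N))    # responses to stimulus 2

df = resp2.mean(axis=0) - resp1.mean(axis=0)          # signal vector (assuming ds = 1)
Q = 0.5 * (np.cov(resp1.T) + np.cov(resp2.T))         # averaged covariance (as in Eq. 3)

# Information in the original voxel space: I = df' Q^{-1} df
I_voxel = df @ np.linalg.solve(Q, df)

# Eigen decomposition of the averaged covariance (principal components)
lam, V = np.linalg.eigh(Q)          # lam[i]: variance along PC i; V[:, i]: eigenvector v_i

# Information summed across eigendimensions: sum_i (df . v_i)^2 / lambda_i
proj_sq = (V.T @ df) ** 2
I_eigen = np.sum(proj_sq / lam)

print(I_voxel, I_eigen)             # the two values agree up to numerical precision
print(np.sum(proj_sq), df @ df)     # sum of squared projections equals the squared norm of df
```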