
Figure 8. Population decoding reveals that multimodal representations emerge where modalities overlap.


(A) Schematic of the encoding model and decoding. A separate encoding model was fit for each neuron. Population decoding was performed on local groups of ~50 neurons. A prior model was used to account for correlations between the decoded variable and other task variables (Methods).
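
For illustration, a minimal, hypothetical sketch of this kind of encoding-model-based (MAP) population decoding on simulated data, assuming linear-Gaussian encoding models, a toy standard-normal prior, and a single correlated nuisance variable held at its observed value; the paper's actual regressors, prior model, and neuron groupings are specified in its Methods.

```python
# Hypothetical sketch of encoding-model-based population decoding;
# assumptions: linear-Gaussian encoding models, toy prior, simulated data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_neurons, n_time = 50, 500                # ~50 neurons per local group
x = rng.standard_normal(n_time)            # variable to decode (e.g., ball velocity)
z = 0.6 * x + rng.standard_normal(n_time)  # correlated nuisance task variable
X = np.column_stack([x, z])
R = X @ rng.standard_normal((2, n_neurons)) + 0.5 * rng.standard_normal((n_time, n_neurons))

# 1) Fit a separate encoding model for each neuron (activity ~ task variables).
models = [Ridge(alpha=1.0).fit(X, R[:, i]) for i in range(n_neurons)]
W = np.array([m.coef_ for m in models])                # (n_neurons, 2)
b = np.array([m.intercept_ for m in models])           # (n_neurons,)
sigma = np.array([np.std(R[:, i] - m.predict(X)) for i, m in enumerate(models)])

# 2) Decode x at each timepoint by MAP inference: sum per-neuron Gaussian
#    log-likelihoods over a grid of candidate values and add a log-prior.
grid = np.linspace(-4, 4, 81)
log_prior = -0.5 * grid**2                             # toy prior (assumption)
x_hat = np.empty(n_time)
for t in range(n_time):
    pred = grid[:, None] * W[:, 0] + z[t] * W[:, 1] + b   # (n_grid, n_neurons)
    ll = log_prior + (-0.5 * ((R[t] - pred) / sigma) ** 2).sum(axis=1)
    x_hat[t] = grid[np.argmax(ll)]

print("decoding R^2 =", 1 - np.var(x - x_hat) / np.var(x))
```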

(B) Maps of decoding accuracy for the modalities that were explicitly decorrelated in this study: angular and linear screen and ball velocity. Left and center-left: decoding maps for the single-modality (screen and ball) features. Center-right: to illustrate where the decoding accuracy of the screen and ball single-modality maps overlapped, the two maps were multiplied pixel-wise. Right: maps for the multimodal feature (drift).
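
As a toy illustration of the overlap map, the single-modality accuracy maps are combined pixel by pixel; the map shapes and values below are made up.

```python
# Toy illustration of the pixel-wise overlap map in (B); values are invented.
import numpy as np

screen_map = np.array([[0.4, 0.1], [0.3, 0.0]])  # decoding accuracy per pixel
ball_map   = np.array([[0.2, 0.5], [0.3, 0.0]])
overlap    = screen_map * ball_map   # high only where BOTH modalities decode well
print(overlap)   # [[0.08 0.05]
                 #  [0.09 0.  ]]
```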

(C) Mean decoding performance (R²_neural), split by layer. Error bars, 5th and 95th percentiles of the hierarchical bootstrap of the mean. Number of neurons: layer 2/3, 11,639; layer 5, 3,717. Colors indicate area as in the legend below.
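
A minimal sketch of a hierarchical bootstrap of the mean, assuming two levels of resampling (sessions, then neurons within sessions) and toy accuracy values; the paper's exact grouping levels are described in its Methods.

```python
# Hierarchical bootstrap of the mean (assumed levels: sessions, then neurons).
import numpy as np

rng = np.random.default_rng(0)
# per-session arrays of per-neuron decoding accuracies (toy values)
sessions = [rng.normal(0.3, 0.1, size=n) for n in (120, 80, 200)]

boot_means = []
for _ in range(10_000):
    picked = rng.integers(len(sessions), size=len(sessions))       # resample sessions
    vals = [rng.choice(sessions[s], size=len(sessions[s]), replace=True)
            for s in picked]                                        # resample neurons
    boot_means.append(np.mean(np.concatenate(vals)))

lo, hi = np.percentile(boot_means, [5, 95])
print(f"mean = {np.mean(np.concatenate(sessions)):.3f}, "
      f"5th-95th percentile interval = [{lo:.3f}, {hi:.3f}]")
```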

(D) Maps as in (B), but showing decoding accuracy for open-loop timepoints using an encoding model fit on closed-loop timepoints. Plots next to the maps show mean R²_neural by area. For these plots, closed-loop accuracy was computed using only the timepoints that were later replayed during open-loop segments. Error bars, 5th and 95th percentiles of the hierarchical bootstrap of the mean.
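
A hypothetical sketch of this matched open-loop evaluation on simulated data: a plain ridge decoder stands in for the paper's encoding-model-based decoder, and the closed-loop, open-loop, and replayed masks are made up.

```python
# Toy sketch of fitting on closed-loop timepoints, evaluating on open-loop
# (replay) timepoints, and restricting closed-loop accuracy to the replayed
# timepoints for a matched comparison. Masks and data are simulated.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n_time, n_neurons = 1000, 50
x = rng.standard_normal(n_time)                        # task variable
R = np.outer(x, rng.standard_normal(n_neurons)) + rng.standard_normal((n_time, n_neurons))

closed = np.zeros(n_time, bool); closed[:700] = True   # closed-loop timepoints
open_loop = ~closed                                     # open-loop (replay) timepoints
replayed = closed & (np.arange(n_time) >= 400)          # closed-loop timepoints later replayed (toy)

dec = Ridge(alpha=1.0).fit(R[closed], x[closed])        # fit on closed-loop only
r2_open = r2_score(x[open_loop], dec.predict(R[open_loop]))
r2_closed_matched = r2_score(x[replayed], dec.predict(R[replayed]))
print(f"open-loop R^2 = {r2_open:.2f}, matched closed-loop R^2 = {r2_closed_matched:.2f}")
```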

(E) Maps of decoding accuracy for long-timescale integration features.

See also Figure S8.