eLife. 2018 May 29;7:e33904. doi: 10.7554/eLife.33904

Figure 2. Decoding performance of perception and imagery over time.

(A) Decoding accuracy from a classifier that was trained and tested on the same time points. Filled areas and thick lines indicate significant above-chance decoding (cluster corrected, p<0.05). The shaded area represents the standard error of the mean. The dotted line indicates chance level. For perception, zero signifies the onset of the stimulus; for imagery, zero signifies the onset of the retro-cue. (B) Temporal generalization matrix with discretized accuracy. Training time is shown on the vertical axis and testing time on the horizontal axis. Significant clusters are indicated by black contours. Numbers indicate time points of interest that are discussed in the text. (C) Proportion of time points within the significant time window that had significantly lower accuracy than the diagonal, that is, the specificity of the neural representation at each time point during above-chance diagonal decoding. (D) Source-level contribution to the classifiers at selected training times. Source data for the analyses reported here are available in Figure 2—source data 1.
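The temporal generalization analysis summarised in panels A and B can be sketched as follows: a classifier is trained at each time point and then tested at every time point, yielding a training time x testing time accuracy matrix whose diagonal corresponds to panel A. This is a minimal illustration only, assuming MEG epochs in a NumPy array X of shape (trials, channels, times) and binary category labels y; the linear discriminant classifier and cross-validation scheme are assumptions, not necessarily the authors' exact pipeline.

```python
# Minimal sketch of temporal generalization decoding (illustrative only).
# Assumes X: (n_trials, n_channels, n_times) MEG data, y: (n_trials,) labels.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold

def temporal_generalization(X, y, n_folds=5):
    n_trials, n_channels, n_times = X.shape
    acc = np.zeros((n_times, n_times))          # training time x testing time
    cv = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=0)
    for train_idx, test_idx in cv.split(X[:, :, 0], y):
        for t_train in range(n_times):
            clf = LinearDiscriminantAnalysis()
            clf.fit(X[train_idx, :, t_train], y[train_idx])
            for t_test in range(n_times):
                acc[t_train, t_test] += clf.score(X[test_idx, :, t_test], y[test_idx])
    acc /= n_folds
    return acc      # diagonal ~ panel A, full matrix ~ panel B
```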

Figure 2—source data 1. Temporal dynamics within perception and imagery.
The files '...Acc.mat' contain the subjects x time x time decoding accuracy within perception and imagery. The files '...Sig.mat' contain the time x time cluster p values of the comparison of these accuracies with chance level.
DOI: 10.7554/eLife.33904.008
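For readers who want to inspect these files, the sketch below shows one way to load them in Python. The file names are hypothetical placeholders (the legend truncates them), and the variable names inside the .mat files may differ from what is assumed here.

```python
# Sketch of inspecting Figure 2—source data 1 (file names are placeholders;
# substitute the actual '...Acc.mat' and '...Sig.mat' names from the download).
import numpy as np
from scipy.io import loadmat

acc = loadmat('PerceptionAcc.mat')   # hypothetical name for a '...Acc.mat' file
sig = loadmat('PerceptionSig.mat')   # hypothetical name for a '...Sig.mat' file

# Per the legend: subjects x time x time accuracies and time x time cluster p values.
accuracy = next(v for k, v in acc.items() if not k.startswith('__'))
p_values = next(v for k, v in sig.items() if not k.startswith('__'))
print(accuracy.shape, p_values.shape)

# Group-average generalization matrix and its significant (p < 0.05) clusters.
mean_acc = accuracy.mean(axis=0)
significant = p_values < 0.05
print('significant time-time points:', int(significant.sum()))
```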
Figure 2—source data 2. Decoding from eye tracker data.
The file '..eyeAcc.mat' contains the subject x time decoding accuracies from the x and y eye tracker channels within perception and imagery. The file '..EyeSig.mat' contains the cluster-based p values of these accuracies compared to chance level.
DOI: 10.7554/eLife.33904.009
Figure 2—source data 3. Vividness median split.
The file '...ImaVivSplit.mat' contains the subjects x time x time decoding accuracy within imagery for the high vividness and low vividness groups and the time x time p values of the comparison between the two groups. The file '...PercVivSplit.mat' contains the same for the decoding within perception.
DOI: 10.7554/eLife.33904.010


Figure 2—figure supplement 1. Decoding results throughout the entire imagery period.


(A) Temporal generalization matrix. Training time is shown on the vertical axis and testing time on the horizontal axis. (B) Decoding accuracy from a classifier that was trained and tested on the same time points during imagery. (C) For each testing point, the proportion of time points that resulted in significantly lower accuracy than the diagonal decoding at that time point, that is, the temporal specificity of the representations over time. Source data for the analyses reported here are available in Figure 2—source data 1.
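The specificity measure in panel C can be illustrated with the sketch below, assuming a subjects x training-time x testing-time accuracy array. The paired Wilcoxon test is an assumption for illustration; the authors' exact statistics may differ.

```python
# Illustrative sketch of the "temporal specificity" measure: for each testing
# time, the fraction of other training times whose accuracy is significantly
# lower than the diagonal (same-time) accuracy.
import numpy as np
from scipy.stats import wilcoxon

def temporal_specificity(acc, alpha=0.05):
    # acc: subjects x training time x testing time decoding accuracies
    n_sub, n_train, n_test = acc.shape
    spec = np.zeros(n_test)
    for t in range(n_test):
        diag = acc[:, t, t]
        lower = 0
        for tr in range(n_train):
            if tr == t:
                continue
            stat, p = wilcoxon(acc[:, tr, t], diag, alternative='less')
            lower += p < alpha
        spec[t] = lower / (n_train - 1)
    return spec
```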
Figure 2—figure supplement 2. Decoding on eye tracker data.


(A) Decoding accuracy over time on eye tracker data during perception. Filled areas and thick lines indicate significant above chance decoding (cluster corrected, p<0.05). The shaded area represents the standard error of the mean. The dotted line indicates chance level. (B) Decoding accuracy over time on eye tracker data during imagery. (C) Correlation over participants between eye tracker decoding accuracy and brain decoding accuracy, averaged over the period during which eye tracker decoding was significant. Source data for the analyses reported here are available in Figure 2—source data 2.
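The across-participant correlation in panel C could be computed along the lines of the sketch below; the Pearson correlation and the variable layout are assumptions for illustration, not a description of the authors' code.

```python
# Sketch of the across-participant correlation: average each participant's eye
# tracker and brain decoding accuracy over the window in which eye tracker
# decoding was significant, then correlate the two across participants.
import numpy as np
from scipy.stats import pearsonr

def eye_brain_correlation(eye_acc, brain_acc, sig_window):
    # eye_acc, brain_acc: subjects x time accuracies; sig_window: boolean time mask
    eye_mean = eye_acc[:, sig_window].mean(axis=1)
    brain_mean = brain_acc[:, sig_window].mean(axis=1)
    r, p = pearsonr(eye_mean, brain_mean)
    return r, p
```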
Figure 2—figure supplement 3. Differences in decoding accuracy between high and low vivid participants during perception and imagery.


None of the differences were significant after correction for multiple comparisons. (A) Decoding accuracy from a classifier that was trained and tested on the same time points during perception and (C) during imagery. The red line denotes the accuracy for the high vividness group, the blue line the accuracy for the low vividness group. (B) Difference between participants with high and low vividness in temporal generalization accuracy during perception and (D) during imagery. Reddish colors indicate higher accuracy for the high vividness group and bluish colors indicate higher accuracy for the low vividness group. Source data for the analyses reported here are available in Figure 2—source data 3.
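The vividness median split underlying this figure can be sketched as follows. The independent-samples t-test shown here is illustrative only; the reported group comparison used correction for multiple comparisons.

```python
# Sketch of a vividness median split (illustrative): divide participants into
# high/low groups at the median rating and compare their time x time accuracies.
import numpy as np
from scipy.stats import ttest_ind

def vividness_split_difference(acc, vividness):
    # acc: subjects x time x time accuracies; vividness: per-subject mean rating
    high = vividness > np.median(vividness)
    low = ~high
    diff = acc[high].mean(axis=0) - acc[low].mean(axis=0)   # panels B/D style map
    t, p = ttest_ind(acc[high], acc[low], axis=0)           # uncorrected p values
    return diff, p
```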