Author manuscript; available in PMC: 2024 Aug 7.
Published in final edited form as: Nat Neurosci. 2023 May 1;26(5):858–866. doi: 10.1038/s41593-023-01304-9

Extended Data Fig. 2. Perceived and imagined speech identification performance.


Language decoders were trained for subjects S1 and S2 on fMRI responses recorded while the subjects listened to narrative stories. (a) The decoders were evaluated on single-trial fMRI responses recorded while the subjects listened to the perceived speech test story. The color at (i, j) reflects the BERTScore similarity between the ith second of the decoder prediction and the jth second of the actual stimulus. Identification accuracy was significantly higher than expected by chance (p < 0.05, one-sided permutation test). Corresponding results for subject S3 are shown in Fig. 1f in the main text. (b) The decoders were evaluated on single-trial fMRI responses recorded while the subjects imagined telling each of five 1-minute test stories twice. Decoder predictions were compared to reference transcripts that were separately recorded from the same subjects. Each row corresponds to a scan, and the colors reflect the similarities between the decoder prediction and all five reference transcripts. For each scan, the decoder prediction was most similar to the reference transcript of the correct story (100% identification accuracy). Corresponding results for subject S3 are shown in Fig. 3a in the main text.
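The identification analysis described above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes a precomputed prediction-by-reference similarity matrix (e.g., of BERTScore values) whose diagonal holds the correct pairings, scores a trial as identified when its diagonal entry is the row maximum, and estimates a one-sided p-value by permuting the reference labels. The function names and the permutation count are illustrative choices.

```python
import numpy as np

def identification_accuracy(sim):
    # sim[i, j]: similarity between decoder prediction i and reference j.
    # A trial is identified correctly when its matching reference
    # (the diagonal entry) has the highest similarity in its row.
    n = sim.shape[0]
    return float(np.mean(np.argmax(sim, axis=1) == np.arange(n)))

def permutation_pvalue(sim, n_perm=10000, seed=0):
    # One-sided permutation test: shuffle the reference labels
    # (columns) to build a null distribution of identification
    # accuracy, then count how often the null meets or exceeds
    # the observed accuracy.  The +1 terms give a conservative
    # estimate that can never be exactly zero.
    rng = np.random.default_rng(seed)
    observed = identification_accuracy(sim)
    n = sim.shape[0]
    null = np.empty(n_perm)
    for k in range(n_perm):
        null[k] = identification_accuracy(sim[:, rng.permutation(n)])
    return (1 + np.sum(null >= observed)) / (1 + n_perm)
```

With a strongly diagonal similarity matrix, `identification_accuracy` returns 1.0 and the permutation p-value falls well below 0.05, mirroring the above-chance result reported for the perceived speech test story.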