2022 Mar 7;25(3):369–380. doi: 10.1038/s41593-022-01026-4

Extended Data Fig. 9 (Figure S9). Comparison of GPT-2 and concatenation of static embeddings.


The superior performance of GPT-2-based contextual embeddings in the encoding models may stem from the fact that these embeddings carry information about the identities of the preceding words. To test this possibility, we concatenated the GloVe embeddings of the current word and the 10 preceding words and reduced the result to 50 features. GPT-2-based encoding outperformed this simple concatenation before word onset, suggesting that GPT-2's ability to compress contextual information improves the modeling of neural signals before word onset. Error bars indicate the standard error of the encoding-model performance across electrodes.
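As a rough illustration, the concatenation baseline described above can be sketched as follows. This is a minimal sketch under stated assumptions: random vectors stand in for real GloVe embeddings, zero-padding is assumed for words near the start of the transcript, and PCA is assumed as the dimensionality-reduction method, since the caption does not specify one.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_words, glove_dim, context_len, target_dim = 500, 50, 10, 50

# Stand-in for real GloVe vectors (one row per word in the transcript).
glove = rng.standard_normal((n_words, glove_dim))

# Zero-pad so the first words have a full (if empty) 10-word context;
# this padding choice is an assumption, not stated in the caption.
padded = np.vstack([np.zeros((context_len, glove_dim)), glove])

# For each word, stack the vectors of the 10 preceding words (oldest first)
# with the current word's vector, yielding an 11 * 50 = 550-dim embedding.
concat = np.hstack([padded[i : i + n_words] for i in range(context_len + 1)])

# Reduce the concatenation to 50 features (PCA assumed).
reduced = PCA(n_components=target_dim).fit_transform(concat)
print(concat.shape, reduced.shape)  # (500, 550) (500, 50)
```

The reduced 50-dimensional features can then be fed to the same linear encoding models used for the GPT-2 embeddings, making the two representations directly comparable.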