
Figure 4.

Synchronous activity in primary visual (Left) and primary motor (Right) areas induced by simulated recognition of spoken words grounded in the context of visual perception (Top) and action execution (Bottom). Coherence coefficients between oscillatory responses in area M1i (where CA-circuit parts conveying model correlates of “articulatory” information are reactivated) and primary visual (V1, Left) and motor (M1L, Right) areas (where simulated “perception” and “action” patterns of activation, respectively, are stored) during presentation of object- and action-related words to area A1 are plotted for the different frequency bands as a function of time. The synchronous activity reflects the periodic spreading of activity waves within stimulus-specific CA circuits (see Figure 2, top plot), which link up phonological patterns in “auditory-articulatory” areas (A1, M1i) with “semantic” information coming from the model's sensory (V1) or motor (M1L) systems. Note the clear double dissociation: “articulatory” areas show a high degree of synchronization with “visual” (but not with “motor”) areas during presentation of words with object-related meaning (Top diagrams), whereas action-related words exhibit the opposite pattern (Bottom diagrams), mirroring the spectral data shown in Figure 3B.
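
For illustration, band-averaged coherence of the kind plotted in this figure can be estimated with standard spectral tools. The sketch below is a minimal, hypothetical example and not the authors' analysis pipeline: the surrogate signals, sampling rate, window length, step size, and band edges are all assumptions made for demonstration.

```python
# Illustrative sketch (not the authors' method): time-resolved, band-averaged
# coherence between two surrogate "area" signals. All parameters are assumed.
import numpy as np
from scipy.signal import coherence

fs = 1000.0                          # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)         # 10 s of surrogate data

# Two surrogate oscillatory responses sharing a 40 Hz component, standing in
# for, e.g., M1i and V1 activity time courses.
shared = np.sin(2 * np.pi * 40 * t)
x = shared + 0.5 * np.random.randn(t.size)   # "M1i"-like signal
y = shared + 0.5 * np.random.randn(t.size)   # "V1"-like signal

bands = {"theta": (4, 8), "alpha": (8, 12), "beta": (13, 30), "gamma": (30, 80)}
win, step = int(2 * fs), int(0.5 * fs)       # 2 s sliding window, 0.5 s step

# Slide the window over both signals; in each window compute magnitude-squared
# coherence and average it within each frequency band.
results = {name: [] for name in bands}
times = []
for start in range(0, t.size - win + 1, step):
    f, cxy = coherence(x[start:start + win], y[start:start + win],
                       fs=fs, nperseg=win // 4)
    for name, (lo, hi) in bands.items():
        results[name].append(cxy[(f >= lo) & (f < hi)].mean())
    times.append(t[start + win // 2])

for name in bands:
    print(f"{name}: mean coherence {np.mean(results[name]):.2f}")
```

With signals constructed this way, coherence is expected to be high only in the band containing the shared 40 Hz component, which is the kind of band-specific, stimulus-dependent coupling the figure reports between the model's “articulatory” and sensory or motor areas.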