Author manuscript; available in PMC: 2017 Jun 1.
Published in final edited form as: Trends Cogn Sci. 2016 Apr 30;20(6):425–443. doi: 10.1016/j.tics.2016.03.014

Figure 2. Multimodal surface matching and hyperalignment.

(a) The Multimodal Surface Matching framework [31] accepts multiple sources of information to align subjects: not only sulcal patterns, but also cortical thickness, myelin maps, RS-fMRI-derived functional connectivity, and other features. (b) After a region of interest is selected in a common anatomical space, hyperalignment [34] aligns representational spaces across subjects; the transformations are typically estimated from responses to movie stimuli.
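At its core, hyperalignment rotates each subject's high-dimensional response space into a shared space using an orthogonal (Procrustes) transformation. A minimal sketch of that step, using synthetic data in place of real fMRI time series (all array shapes and names here are illustrative assumptions, not the published implementation):

```python
import numpy as np

def procrustes_rotation(source, target):
    # Orthogonal Procrustes: find the rotation R minimizing
    # ||source @ R - target||_F, via SVD of the cross-covariance.
    u, _, vt = np.linalg.svd(target.T @ source)
    return (u @ vt).T  # source @ R approximates target

rng = np.random.default_rng(0)
# Toy "reference subject": 100 time points x 20 voxels in the ROI
ref = rng.standard_normal((100, 20))
# Second subject simulated as a hidden rotation of the reference
hidden = np.linalg.qr(rng.standard_normal((20, 20)))[0]
subj = ref @ hidden

R = procrustes_rotation(subj, ref)
print(np.allclose(subj @ R, ref))  # → True: alignment recovered
```

In practice, one subject (or an iteratively refined average) serves as the reference, and each subject's transformation is fit on shared stimulus data such as a movie, then applied to held-out experimental data.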