
Figure 1:

A translation model for learning the alignment between functional dFNC states and structural components. The attention network module is a feedforward network (input: 23, hidden: 50, output: 23) with 50% dropout (Srivastava et al., 2014) in the hidden layer. The sequence predictor module has a recurrent layer of 50 gated recurrent units and a feedforward network (input: 73 = 50 + 23, hidden: 50, output: 5), also with 50% dropout in the hidden layer. The recurrent layer uses the dFNC correlation matrix as an embedding of the dFNC states in real vector space.
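
The caption specifies enough layer widths to sketch the architecture. Below is a minimal PyTorch sketch, not the authors' code: the class names, the ReLU activations, the batch layout, and the GRU input dimension `emb_dim` are all assumptions; only the layer sizes (23, 50, 73, 5), the 50 GRU units, and the 50% dropout come from the caption.

```python
import torch
import torch.nn as nn

N_STRUCT = 23    # structural components (attention input/output width, per caption)
N_STATES = 5     # dFNC states (sequence-predictor output width, per caption)
GRU_UNITS = 50   # recurrent layer size, per caption


class AttentionNetwork(nn.Module):
    """Feedforward net: 23 -> 50 -> 23, with 50% dropout in the hidden layer."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_STRUCT, 50),
            nn.ReLU(),          # activation is an assumption; not stated in the figure
            nn.Dropout(p=0.5),  # dropout applied in the hidden layer
            nn.Linear(50, N_STRUCT),
        )

    def forward(self, x):
        return self.net(x)


class SequencePredictor(nn.Module):
    """GRU (50 units) + feedforward net: 73 (= 50 + 23) -> 50 -> 5."""

    def __init__(self, emb_dim):
        # emb_dim: dimensionality of the dFNC correlation-matrix embedding
        # of each state; the exact size is not given in the figure.
        super().__init__()
        self.gru = nn.GRU(emb_dim, GRU_UNITS, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(GRU_UNITS + N_STRUCT, 50),
            nn.ReLU(),
            nn.Dropout(p=0.5),
            nn.Linear(50, N_STATES),
        )

    def forward(self, state_embeddings, attention_out):
        # state_embeddings: (batch, seq_len, emb_dim) -- dFNC correlation-matrix
        # vectors used as embeddings of the dFNC states (per the caption).
        # attention_out: (batch, 23) -- output of the attention network module.
        h, _ = self.gru(state_embeddings)                          # (batch, seq_len, 50)
        a = attention_out.unsqueeze(1).expand(-1, h.size(1), -1)  # (batch, seq_len, 23)
        return self.head(torch.cat([h, a], dim=-1))               # logits over 5 states
```

A hypothetical usage, assuming 23-dimensional state embeddings:

```python
attn = AttentionNetwork()
pred = SequencePredictor(emb_dim=23)
struct = torch.randn(4, 23)        # structural-component inputs
states = torch.randn(4, 10, 23)    # sequence of dFNC state embeddings
logits = pred(states, attn(struct))  # shape: (4, 10, 5)
```

Concatenating the 23-dimensional attention output with the 50-dimensional GRU state reproduces the 73-dimensional input to the predictor's feedforward network stated in the caption.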