Author manuscript; available in PMC: 2022 Nov 14.
Published in final edited form as: Neuroimage. 2021 Nov 22;245:118750. doi: 10.1016/j.neuroimage.2021.118750

Table 1.

Algorithm 1: Training GATE model using gradients.

Input: {A_i}_{i=1}^n, {z_i}_{i=1}^n, geometric matrix B, latent space dimension R.

Randomly initialize θ, ϕ.
while not converged do
 Sample a mini-batch of size m from {A_i}, denoted 𝓐_m.
  for all A_i ∈ 𝓐_m do
   Sample ε_i ∼ N(0, I_R) and compute z_i = μ_ϕ(A_i) + ε_i Σ_ϕ(A_i).
   Compute the gradients ∇_θ 𝓛̃(A_i; θ, ϕ) and ∇_ϕ 𝓛̃(A_i; θ, ϕ) with z_i.
  Average the gradients across the batch.
 Update θ, ϕ using the averaged gradients.
Return θ, ϕ.
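The inner step of the algorithm is the standard reparameterization trick: instead of sampling z_i directly, a standard normal ε_i is drawn and shifted/scaled by the encoder outputs μ_ϕ(A_i) and Σ_ϕ(A_i), so the sample remains differentiable in ϕ. A minimal sketch of that step is below; the linear maps standing in for μ_ϕ and Σ_ϕ are hypothetical placeholders, not the GATE network itself.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes, R = 6, 3                        # toy graph size and latent dimension
W_mu = rng.normal(size=(n_nodes, R))     # hypothetical encoder weights (not from the paper)
W_sig = rng.normal(size=(n_nodes, R))

def encode(A):
    """Toy stand-ins for mu_phi(A) and Sigma_phi(A); exp keeps the scale positive."""
    pooled = A.mean(axis=0)              # crude pooling of the adjacency matrix
    mu = pooled @ W_mu
    sigma = np.exp(pooled @ W_sig)
    return mu, sigma

def reparameterize(A, rng):
    """z_i = mu_phi(A_i) + eps_i * Sigma_phi(A_i), with eps_i ~ N(0, I_R)."""
    mu, sigma = encode(A)
    eps = rng.standard_normal(R)
    return mu + eps * sigma

# One pass of the inner loop for a single sampled adjacency matrix A_i.
A_i = rng.integers(0, 2, size=(n_nodes, n_nodes)).astype(float)
z_i = reparameterize(A_i, rng)
```

In a full implementation the gradients ∇_θ 𝓛̃ and ∇_ϕ 𝓛̃ would then be taken through z_i by an autodiff framework, averaged over the mini-batch, and applied to update θ and ϕ.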