Entropy. 2020 Feb 13;22(2):213. doi: 10.3390/e22020213
Algorithm 1 VIB-GMM algorithm for unsupervised learning.

1:  input: Dataset D := {x_i}_{i=1}^N, parameter s_0.
2:  output: Optimal DNN weights θ, ϕ and GMM parameters ψ = {π_c, μ_c, Σ_c}_{c=1}^{|C|}.
3:  initialization: Initialize θ, ϕ, ψ.
4:  repeat
5:      Randomly select b mini-batch samples {x_i}_{i=1}^b from D.
6:      Draw m random i.i.d. samples {z_j}_{j=1}^m from P_Z.
7:      Compute the m samples u_{i,j} = f̃_θ(x_i, z_j).
8:      For the selected mini-batch, compute the gradients of the empirical cost (20).
9:      Update θ, ϕ, ψ using the estimated gradients (e.g., with SGD or Adam).
10: until convergence of θ, ϕ, ψ.
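The loop above can be sketched in plain NumPy. Everything in this sketch is an illustrative stand-in rather than the paper's implementation: the DNN encoder f̃_θ is replaced by a linear mean map with unit-variance reparameterization noise, the decoder weights ϕ are omitted, the cost is only the GMM negative log-likelihood term (a toy surrogate for the empirical cost (20)), and the update in lines 8–9 touches only the GMM means, for which the gradient has a closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (illustrative, not from the paper): dataset size N, input dim d,
# latent dim, and number of GMM components |C|.
N, d, latent, C = 200, 4, 2, 3
D = rng.normal(size=(N, d))             # dataset D = {x_i}_{i=1}^N

# Encoder parameters theta: a linear mean map, standing in for the DNN.
W = 0.1 * rng.normal(size=(d, latent))

def f_theta(x, z):
    """u = f~_theta(x, z): reparameterized embedding mu_theta(x) + sigma * z
    (sigma fixed to 1 here for brevity)."""
    return x @ W + z

# GMM parameters psi = {pi_c, mu_c, Sigma_c}, with diagonal covariances.
pi = np.full(C, 1.0 / C)
mu = rng.normal(size=(C, latent))
var = np.ones((C, latent))

def log_components(u):
    """log pi_c + log N(u; mu_c, diag(var_c)) for each sample and component."""
    diff = u[:, None, :] - mu[None, :, :]                    # (n, C, latent)
    return (np.log(pi)
            - 0.5 * np.sum(np.log(2 * np.pi * var), axis=-1)
            - 0.5 * np.sum(diff**2 / var, axis=-1))

def cost(u):
    """Toy surrogate for the empirical cost (20): average negative
    log-likelihood of the embeddings under the current GMM (log-sum-exp)."""
    lc = log_components(u)
    m = lc.max(axis=1, keepdims=True)
    return -np.mean(m[:, 0] + np.log(np.sum(np.exp(lc - m), axis=1)))

b, m_samples, lr = 32, 4, 0.1
u_eval = f_theta(D, rng.normal(size=(N, latent)))  # fixed evaluation embeddings
cost_before = cost(u_eval)

for step in range(200):
    # 5: randomly select b mini-batch samples from D
    batch = D[rng.choice(N, size=b, replace=False)]
    # 6: draw m i.i.d. samples z_j from P_Z = N(0, I)
    z = rng.normal(size=(m_samples, b, latent))
    # 7: compute the m * b samples u_ij = f~_theta(x_i, z_j)
    u = np.concatenate([f_theta(batch, zj) for zj in z])
    # 8: gradient of the cost w.r.t. the GMM means (closed form; a DL
    #    framework would also backpropagate into theta and phi)
    lc = log_components(u)
    resp = np.exp(lc - lc.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)                  # responsibilities
    grad_mu = -(resp[:, :, None] * (u[:, None, :] - mu[None, :, :]) / var).mean(axis=0)
    # 9: plain SGD update of the GMM means (theta, phi updates omitted)
    mu -= lr * grad_mu

cost_after = cost(u_eval)
print(f"cost before: {cost_before:.3f}, after: {cost_after:.3f}")
```

The sketch keeps the paper's loop structure (mini-batch selection, reparameterized sampling from P_Z, gradient step, repeat until convergence) while collapsing the model itself to the smallest pieces that still exercise it; in a real implementation the gradients of all of θ, ϕ, ψ come from automatic differentiation of the full cost.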