Algorithm 1. Bayesian Switching Factor Analysis
step 1: initialization
• set the number of states, K (the initial value of K is usually set to a large value, and during learning those states with small contributions receive weights close to zero);
• set the intrinsic dimensionality of the latent subspace, P (in general, P is set smaller than the data dimension, P < D; in a fully noninformative initialization, however, one can simply set P = D − 1);
• set the prior distribution parameters (Appendix A);
• initialize (e.g., using the K-means algorithm, with Euclidean distance as the similarity measure);
repeat
step 2: optimization of the model parameters (variational-maximization step)
• update q(ϕ) using update Equations (B.4), (B.5), and (B.11);
• update q(θ) using update Equations (B.2) and (B.3);
step 3: optimization of the latent state variables (variational-expectation step)
• update using update Equation (D.7);
• update using update Equation (D.8);
step 4: optimization of the posterior hyperparameters
• update the posterior hyperparameters using Equations (E.2)–(E.6);
step 5: check for convergence
• evaluate the lower bound, ℒ, from Equation (10) using Equation (F.1).
until convergence (ℒ^(iter) − ℒ^(iter−1) < thr)
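
To make the overall flow of Algorithm 1 concrete, the short Python sketch below mirrors its structure: a K-means initialization, alternating maximization-style and expectation-style updates inside a repeat loop, a placeholder for the hyperparameter updates, and a convergence check on a scalar objective. It is only a stand-in, not the BSFA model itself: the step bodies use a simple isotropic Gaussian mixture fitted by EM, with the log-likelihood playing the role of the lower bound ℒ, whereas in the actual algorithm these bodies are given by Equations (B.2)–(B.11), (D.7)–(D.8), (E.2)–(E.6), and (F.1). All names and numerical settings in the sketch are illustrative.

```python
# Minimal, runnable sketch of the loop structure in Algorithm 1, using a deliberately
# simplified stand-in model (isotropic Gaussian mixture fitted by EM) rather than the
# full Bayesian switching factor analysis. Steps 2-5 below stand in for the paper's
# update Equations (B.2)-(B.11), (D.7)-(D.8), (E.2)-(E.6) and the bound (F.1).
import numpy as np
from scipy.special import logsumexp
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# toy data: N observations of dimension D drawn from two well-separated clusters
N, D, K = 500, 4, 8                                   # K set deliberately large, as in step 1
X = np.concatenate([rng.normal(m, 0.5, size=(N // 2, D)) for m in (-2.0, 2.0)])

# --- step 1: initialization (K-means with Euclidean distance) ---
labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(X)
resp = np.zeros((N, K))
resp[np.arange(N), labels] = 1.0                      # hard assignments as initial responsibilities

lower_bound, thr, max_iter = -np.inf, 1e-6, 200
for it in range(max_iter):
    # --- step 2: parameter updates (stand-in for Equations (B.2)-(B.11)) ---
    Nk = resp.sum(axis=0) + 1e-12
    weights = Nk / Nk.sum()
    means = (resp.T @ X) / Nk[:, None]
    sq_dist = ((X[:, None, :] - means[None]) ** 2).sum(axis=-1)
    var = float((resp * sq_dist).sum() / (Nk.sum() * D))

    # --- step 3: latent-variable updates (stand-in for Equations (D.7)-(D.8)) ---
    log_p = (np.log(weights)[None, :]
             - 0.5 * D * np.log(2.0 * np.pi * var)
             - 0.5 * sq_dist / var)
    log_norm = logsumexp(log_p, axis=1, keepdims=True)
    resp = np.exp(log_p - log_norm)

    # --- step 4: posterior hyperparameter updates would go here (Equations (E.2)-(E.6)) ---

    # --- step 5: convergence check, in the role of the lower bound L and (F.1) ---
    new_bound = float(log_norm.sum())                 # log-likelihood plays the role of L here
    if new_bound - lower_bound < thr:
        break
    lower_bound = new_bound

print(f"stopped after {it + 1} iterations, objective = {new_bound:.2f}")
```

The stopping rule mirrors step 5 of Algorithm 1: the loop terminates once the increase in the objective between successive iterations falls below the small threshold thr.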