Require: input carriers g, choices y
Require: initial hyperparameters θ0, subset of hyperparameters to be optimized θOPT
1: repeat
2:   Optimize for w given current θ → wMAP, Hessian of log-posterior Hθ, log-evidence E
3:   Determine Gaussian prior N(0, Cθ) and Laplace approx. posterior N(wMAP, −Hθ⁻¹)
4:   Calculate Gaussian approximation to the likelihood N(wL, Γ) using the Gaussian product identity, where Γ = (−Hθ − Cθ⁻¹)⁻¹ and wL = −ΓHθwMAP
5:   Optimize E w.r.t. θOPT using a closed-form update (with sparse operations)
6:   Update best θ and corresponding best E
7: until θ converges
8: return wMAP and θ with best E
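The product identity in step 4 can be checked numerically: dividing the Laplace posterior N(wMAP, −Hθ⁻¹) by a zero-mean Gaussian prior with covariance C yields a Gaussian likelihood factor with covariance Γ = (−Hθ − C⁻¹)⁻¹ and mean wL = −ΓHθwMAP. The sketch below uses synthetic matrices (not quantities from any dataset) to verify that this factor, recombined with the prior, recovers the posterior exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

# Synthetic stand-ins for the quantities in the algorithm:
C = np.diag(rng.uniform(0.5, 2.0, d))         # Gaussian prior covariance C_theta
A = rng.standard_normal((d, d))
H = -(A @ A.T + 5 * np.eye(d))                # Hessian of log-posterior (negative definite)
w_map = rng.standard_normal(d)                # MAP estimate

# Step 4: Gaussian approximation to the likelihood via the product identity.
# Likelihood precision = posterior precision minus prior precision.
Gamma = np.linalg.inv(-H - np.linalg.inv(C))  # likelihood covariance
w_L = -Gamma @ H @ w_map                      # likelihood mean

# Check: combining the likelihood factor N(w_L, Gamma) with the prior
# N(0, C) must recover the Laplace posterior N(w_map, -H^{-1}).
post_prec = np.linalg.inv(Gamma) + np.linalg.inv(C)
post_mean = np.linalg.solve(post_prec, np.linalg.inv(Gamma) @ w_L)
assert np.allclose(post_prec, -H)
assert np.allclose(post_mean, w_map)
```

The check confirms the mean formula: Γ⁻¹wL = (−Hθ − Cθ⁻¹)(−ΓHθwMAP) = −HθwMAP, so the recombined posterior mean is (−Hθ)⁻¹(−HθwMAP) = wMAP.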
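The outer loop (steps 1–8) is easiest to see in a conjugate special case. The sketch below is an illustrative assumption, not the model above: a linear-Gaussian observation model with prior w ~ N(0, θ⁻¹I), so wMAP, Hθ, and the log-evidence E are all closed form, and step 5 becomes MacKay's fixed-point update for the single prior-precision hyperparameter θ. The design matrix G, noise variance sigma2, and data y are synthetic stand-ins for the input carriers and choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5
G = rng.standard_normal((n, d))                  # stand-in design matrix of input carriers
w_true = rng.standard_normal(d)
sigma2 = 0.5                                     # known observation noise variance
y = G @ w_true + np.sqrt(sigma2) * rng.standard_normal(n)

theta = 1.0                                      # prior precision: w ~ N(0, theta^{-1} I)
for _ in range(100):
    # Step 2: MAP estimate, log-posterior Hessian, log-evidence E.
    H = -(G.T @ G / sigma2 + theta * np.eye(d))  # Hessian of log-posterior
    w_map = np.linalg.solve(-H, G.T @ y / sigma2)
    r = y - G @ w_map
    E = (-0.5 * (r @ r) / sigma2 - 0.5 * theta * (w_map @ w_map)
         - 0.5 * n * np.log(2 * np.pi * sigma2)
         + 0.5 * d * np.log(theta) - 0.5 * np.linalg.slogdet(-H)[1])
    # Step 5: closed-form fixed-point update of theta (MacKay's evidence update).
    Sigma = np.linalg.inv(-H)                    # posterior covariance
    gamma = d - theta * np.trace(Sigma)          # effective number of parameters
    theta_new = gamma / (w_map @ w_map)
    # Step 7: stop once theta has converged.
    if abs(theta_new - theta) < 1e-8 * theta:
        theta = theta_new
        break
    theta = theta_new
# Step 8: w_map and theta now hold the returned estimates.
```

In the general (non-conjugate) case, step 2 requires a numerical optimization for wMAP, and the Gaussian likelihood factor from step 4 is what makes the step-5 evidence update available in closed form.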