Algorithm 1: HPO EM algorithm for synthetic data
1. Randomly generate hyperparameter values η, λ, μ and the dataset X = {x_1, …, x_N}.
2. Choose a suitable linear basis function ψ(x_n) to obtain an N×M matrix Ψ using (12).
3. Generate parameter values ω by random sampling according to (2).
4. Generate an N-dimensional vector ε sampled randomly from the Gaussian distribution N(0, λ^{-1}), and then generate T = (t_1, …, t_N) by (4) and (5), respectively.
5. E step. Compute the mean m and covariance C using the current hyperparameter values.
m = (Λ + λΨ^T Ψ)^{-1} (Λμ + λΨ^T T),  (49)
C = (Λ + λΨ^T Ψ)^{-1}.  (50)
6. M step. Re-estimate the hyperparameters using the mean m and covariance C obtained in step 5 and the following update equations:
η_i^{new} = 1 / (m_i^2 + C_{ii}),  (51)
λ^{new} = N / (‖T − Ψm‖^2 + Tr(Ψ^T Ψ C)),  (52)
μ^{new} = m.  (53)
7. Compute the likelihood function or the log likelihood function given by the following result:
p(T | X, η, λ, μ) = (λ / 2π)^{N/2} |Λ|^{1/2} |C|^{1/2} exp(−G(m))
or
ln p(T | X, η, λ, μ) = (N/2) ln λ − G(m) + (1/2) Σ_{i=1}^{M} ln η_i + (1/2) ln|C| − (N/2) ln(2π)
and then check the convergence of the hyperparameters or the likelihood. If convergence is not reached, go back to step 5. If the likelihood or the log-likelihood has converged, stop; the algorithm's computational complexity is O(M).
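As a rough illustration of steps 1–7, the Python sketch below runs the HPO EM loop on synthetic data. It is not the paper's implementation: the basis function of (12), the prior of (2), and the generative model of (4)–(5) are not reproduced in this excerpt, so a Gaussian (RBF) basis and Gaussian prior/noise sampling are assumed here, and convergence is checked on the hyperparameters rather than on the likelihood because G(m) is defined elsewhere in the paper. The E and M steps follow (49)–(53).

import numpy as np

def hpo_em_synthetic(N=200, M=20, max_iter=500, tol=1e-6, seed=0):
    # Step 1: random hyperparameter values eta, lambda, mu and the dataset X.
    rng = np.random.default_rng(seed)
    eta = rng.uniform(0.5, 2.0, size=M)      # prior precisions, Lambda = diag(eta)
    lam = rng.uniform(5.0, 20.0)             # noise precision lambda
    mu = rng.normal(0.0, 1.0, size=M)        # prior mean
    X = np.sort(rng.uniform(-1.0, 1.0, size=N))

    # Step 2: N x M design matrix Psi.  A Gaussian (RBF) basis is assumed here
    # in place of the paper's Eq. (12), which is not shown in this excerpt.
    centres = np.linspace(-1.0, 1.0, M)
    Psi = np.exp(-0.5 * ((X[:, None] - centres[None, :]) / 0.2) ** 2)

    # Steps 3-4: sample weights omega from the prior N(mu, Lambda^{-1}) and
    # targets T = Psi omega + eps with eps ~ N(0, lambda^{-1} I).
    omega = mu + rng.normal(size=M) / np.sqrt(eta)
    T = Psi @ omega + rng.normal(size=N) / np.sqrt(lam)

    PsiTPsi = Psi.T @ Psi
    for _ in range(max_iter):
        mu_old, lam_old = mu.copy(), lam

        # Step 5 (E step): posterior mean m and covariance C, Eqs. (49)-(50).
        C = np.linalg.inv(np.diag(eta) + lam * PsiTPsi)
        m = C @ (eta * mu + lam * Psi.T @ T)

        # Step 6 (M step): hyperparameter updates, Eqs. (51)-(53).
        eta = 1.0 / (m ** 2 + np.diag(C))
        resid = T - Psi @ m
        lam = N / (resid @ resid + np.trace(PsiTPsi @ C))
        mu = m.copy()

        # Step 7: check convergence of the hyperparameters (the log-likelihood
        # could be monitored instead, but G(m) is defined outside this excerpt).
        if np.max(np.abs(mu - mu_old)) < tol and abs(lam - lam_old) < tol * lam_old:
            break

    return m, C, eta, lam, mu

if __name__ == "__main__":
    m, C, eta, lam, mu = hpo_em_synthetic()
    print("estimated noise precision lambda:", lam)

Under these assumptions, a quick sanity check is to compare the sampled weights ω against the recovered posterior mean m; the two should agree closely for the basis functions that carry signal.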