Algorithm 3. Expectation maximization algorithm for BN learning |
Step 0:
Initialize μ to μ0, set t = 0 so that μt = μ0, and continue;
Step 1:
To compute μt+1 from the current estimate μt, perform Step 2 and Step 3;
Step 2:
Expectation step: complete the data set based on μt:
a)
Compute the conditional probability distribution of the missing values v* using the Bayesian formula:
P(v* | v, μt) = P(v*, v | μt) / P(v | μt),
where v is the set of observed values.
b)
Obtain the fractional values by assigning to each possible completion of the missing values v* a weight equal to its conditional probability P(v* | v, μt) from Step 2a, and add these weighted records to the incomplete data set to construct the completed data set.
Step 3:
Maximization step: obtain the MLE of the model parameters μt+1 by computing the set of parameters that maximizes the likelihood of the completed data set acquired in Step 2b.
Step 4:
If convergence is obtained, the algorithm stops; otherwise, set t = t + 1 (so that the new μt is the μt+1 just computed) and return to Step 1.
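The steps above can be sketched for a minimal two-node Bayesian network X → Y with binary variables, where some values of X are missing. This is an illustrative toy implementation, not the paper's code: the function name `em_bn`, the parameterization (θx = P(X=1), θy[x] = P(Y=1|X=x)), and the example data are all assumptions made for the sketch.

```python
def em_bn(data, theta_x, theta_y, tol=1e-6, max_iter=200):
    """EM for a toy BN X -> Y. data: list of (x, y) pairs, x may be None
    (missing); y is always observed. theta_x = P(X=1); theta_y is a dict
    {x_value: P(Y=1 | X=x_value)}. Returns the converged parameters."""
    for _ in range(max_iter):  # Step 1: compute mu_{t+1} from mu_t
        # Step 2 (E-step): build a fractionally weighted completed data set.
        weighted = []  # records of (x, y, weight)
        for x, y in data:
            if x is not None:
                weighted.append((x, y, 1.0))  # observed record, weight 1
            else:
                # Step 2a: Bayes formula for the missing value,
                # P(x=1 | y) = P(x=1) P(y | x=1) / sum_x P(x) P(y | x)
                num1 = theta_x * (theta_y[1] if y == 1 else 1 - theta_y[1])
                num0 = (1 - theta_x) * (theta_y[0] if y == 1 else 1 - theta_y[0])
                p1 = num1 / (num1 + num0)
                # Step 2b: add both completions with fractional weights.
                weighted.append((1, y, p1))
                weighted.append((0, y, 1 - p1))
        # Step 3 (M-step): MLE from the fractional counts.
        n = sum(w for _, _, w in weighted)
        new_theta_x = sum(w for x, _, w in weighted if x == 1) / n
        new_theta_y = {}
        for xv in (0, 1):
            nx = sum(w for x, _, w in weighted if x == xv)
            ny1 = sum(w for x, y, w in weighted if x == xv and y == 1)
            new_theta_y[xv] = ny1 / nx if nx > 0 else 0.5
        # Step 4: stop on convergence, else iterate with the new parameters.
        delta = abs(new_theta_x - theta_x) + sum(
            abs(new_theta_y[v] - theta_y[v]) for v in (0, 1))
        theta_x, theta_y = new_theta_x, new_theta_y
        if delta < tol:
            break
    return theta_x, theta_y

# Step 0: initial guess mu_0, then run EM on data with two missing X values.
theta_x, theta_y = em_bn(
    [(1, 1), (1, 1), (0, 0), (0, 1), (None, 1), (None, 0)],
    theta_x=0.5, theta_y={0: 0.3, 1: 0.7})
```

Note that the E-step never discards records with missing values; it spreads each one across all completions in proportion to its posterior probability, which is what makes the subsequent MLE a proper maximization over the expected complete-data likelihood.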