Table 1.
Description of the key properties of the baseline models.
| Structure Learning | Algorithm | Description |
|---|---|---|
| Score-based | BIC | BIC avoids overfitting by selecting the model that maximizes the data log-likelihood minus a penalty term for model complexity. |
| | BDeu | BDeu assumes the parameters of each local distribution follow a Dirichlet prior and scores candidate structures on that basis, without requiring a node ordering in advance. Its key properties stem from the uniform prior over the parameters of each local distribution: structure learning is computationally efficient, no prior knowledge needs to be elicited from experts, and the score satisfies score equivalence. |
| | BDla | BDla avoids the need to set free parameters and produces better results when the parameter space of the model is complex (a mix of uniform and skewed parameter distributions). |
| Hybrid-based | MMHC | MMHC can handle thousands of nodes in reasonable time. First, the MMPC (max–min parents and children) algorithm identifies the parent and child set of each node to construct the skeleton of the network; the skeleton is then searched and scored with the K2 search strategy to obtain the optimal network structure. |
| | H2PC | H2PC first reconstructs the skeleton of the Bayesian network by combining several parents-and-children subroutines to identify each variable's parent and child set, and then performs a greedy hill-climbing search to filter and orient the edges. |
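To make the score-based and search ideas in the table concrete, the following is a minimal, self-contained sketch of BIC scoring for a discrete Bayesian network together with a greedy hill-climbing search over edge additions. This is an illustrative toy, not any of the cited implementations (MMHC and H2PC add constraint-based skeleton pruning before the search); all function names and data structures here are assumptions made for the example.

```python
import math
from collections import Counter
from itertools import product


def bic_score(data, structure, card):
    """BIC = maximized log-likelihood - (ln N / 2) * (free parameters).
    data: list of {variable: state} rows; structure maps each variable to
    a tuple of its parents; card gives each variable's number of states."""
    n = len(data)
    score = 0.0
    for var, parents in structure.items():
        joint, marginal = Counter(), Counter()
        for row in data:
            cfg = tuple(row[p] for p in parents)
            joint[(cfg, row[var])] += 1
            marginal[cfg] += 1
        # Log-likelihood term: sum over observed counts N_ijk * ln(N_ijk / N_ij).
        for (cfg, _), nijk in joint.items():
            score += nijk * math.log(nijk / marginal[cfg])
        # Complexity penalty: (r_i - 1) * q_i free parameters for this node,
        # where q_i is the number of parent configurations.
        q = 1
        for p in parents:
            q *= card[p]
        score -= 0.5 * math.log(n) * (card[var] - 1) * q
    return score


def has_cycle(structure):
    """Return True if following parent links from some node revisits it."""
    def walk(node, on_path):
        for parent in structure[node]:
            if parent in on_path or walk(parent, on_path | {parent}):
                return True
        return False
    return any(walk(v, {v}) for v in structure)


def hill_climb(data, variables, card):
    """Greedy search: start from an empty graph and repeatedly add any
    acyclicity-preserving edge that improves the BIC score, until no
    addition helps (a simplified stand-in for the search phase of
    hybrid algorithms such as MMHC and H2PC)."""
    structure = {v: () for v in variables}
    best = bic_score(data, structure, card)
    improved = True
    while improved:
        improved = False
        for u, v in product(variables, repeat=2):
            if u == v or u in structure[v]:
                continue
            candidate = dict(structure)
            candidate[v] = structure[v] + (u,)
            if has_cycle(candidate):
                continue
            s = bic_score(data, candidate, card)
            if s > best:
                best, structure, improved = s, candidate, True
    return structure
```

On data where one binary variable largely tracks another, the dependent structure scores higher than the independent one, and the hill climb recovers a single edge; by score equivalence of BIC, either orientation of that edge receives the same score.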