Table 1.
Comparison of fit measures for models with different numbers of classes
Columns 2–4 report the full data (N = 90); columns 5–7 the data with outliers removed (N = 85).

| Measure | 1 class | 2 classes | 3 classes | 1 class | 2 classes | 3 classes |
|---|---|---|---|---|---|---|
| Log likelihood | −403.57 | −341.51 | −312.67 | −333.26 | −280.93 | −254.98 |
| Relative entropy | — | .732 | .756 | — | .786 | .795 |
| BIC | 866 | 805 | 810 | 724 | 682 | 692 |
| SABIC | 825 | 719 | 680 | 683 | 597 | 563 |
| DBIC | 842 | 755 | 734 | 700 | 632 | 617 |
| HQ-AIC | 846 | 764 | 749 | 699 | 642 | 633 |
| HT-AIC | 839 | 764 | 784 | 705 | 645 | 678 |
| CLC | — | 716 | 674 | — | 587 | 548 |
Lower values of the information criteria indicate better fit. Relative entropy and CLC require multiple classes to be computed and are undefined for the 1-class model.
BIC = Bayesian Information Criterion; SABIC = Sample-Size-Adjusted BIC; DBIC = Draper BIC; HQ-AIC = Hannan-Quinn Akaike Information Criterion; HT-AIC = Hurvich-Tsai Akaike Information Criterion; CLC = Classification Likelihood Criterion.
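To make the criteria in the table concrete, here is a minimal sketch of how several of them are computed from a model's log likelihood. It assumes the standard textbook formulas (not the authors' code); the number of free parameters `k` for each model is not reported in the table, so the value used in the example is hypothetical.

```python
import math

def bic(ll, k, n):
    """Bayesian Information Criterion: -2*logL + k*ln(n)."""
    return -2 * ll + k * math.log(n)

def sabic(ll, k, n):
    """Sample-size-adjusted BIC: replaces n with (n + 2) / 24 in the penalty."""
    return -2 * ll + k * math.log((n + 2) / 24)

def hq_aic(ll, k, n):
    """Hannan-Quinn criterion: -2*logL + 2*k*ln(ln(n))."""
    return -2 * ll + 2 * k * math.log(math.log(n))

def ht_aic(ll, k, n):
    """Hurvich-Tsai small-sample corrected AIC: AIC + 2k(k+1)/(n-k-1)."""
    return -2 * ll + 2 * k + 2 * k * (k + 1) / (n - k - 1)

# Illustrative call with the 2-class full-data log likelihood from the table
# and a hypothetical k = 10 (the true parameter count is not reported).
print(round(bic(-341.51, 10, 90), 1))
```

Because each criterion penalizes the same log likelihood differently, they can disagree on the best class count, as the table shows (e.g., BIC favors 2 classes while SABIC favors 3).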