Table 1.
Results of the linear boundary examples: K is the number of treatment levels and n is the training set size. The MISC rows report the means and standard deviations (in parentheses) of the misclassification rates, and the VMSE rows report the means and standard deviations (in parentheses) of the value function MSEs. PLS-l1 denotes penalized least squares with covariate-treatment interactions and an l1 penalty (Qian and Murphy, 2011); OWL denotes outcome weighted learning; and GOWL1 and GOWL2 denote the proposed generalized outcome weighted learning with the first and second data duplication methods, respectively. In each scenario, the best result for each criterion is in bold.
| Method | Criterion | (K, n) = (2, 300) | (3, 300) | (5, 500) | (7, 500) |
|---|---|---|---|---|---|
| PLS-l1 | MISC | 0.138 (0.010) | 0.271 (0.076) | 0.443 (0.009) | 0.688 (0.153) |
| | VMSE | 0.060 (0.016) | 0.134 (0.032) | 0.365 (0.016) | 0.497 (0.281) |
| OWL-Lin | MISC | 0.089 (0.025) | 0.262 (0.116) | 0.308 (0.031) | 0.392 (0.200) |
| | VMSE | 0.149 (0.065) | 0.327 (0.186) | 0.412 (0.133) | 0.304 (0.261) |
| OWL-Gau | MISC | 0.139 (0.035) | 0.251 (0.070) | 0.371 (0.062) | 0.592 (0.159) |
| | VMSE | 0.049 (0.035) | 0.318 (0.153) | 0.419 (0.340) | 0.371 (0.160) |
| GOWL1-Lin | MISC | 0.070 (0.036) | **0.064 (0.029)** | **0.136 (0.038)** | **0.226 (0.072)** |
| | VMSE | 0.014 (0.025) | 0.059 (0.066) | 0.022 (0.009) | 0.063 (0.007) |
| GOWL1-Gau | MISC | **0.064 (0.035)** | 0.096 (0.037) | 0.154 (0.019) | 0.325 (0.061) |
| | VMSE | **0.008 (0.014)** | 0.106 (0.057) | **0.020 (0.007)** | 0.051 (0.004) |
| GOWL2-Lin | MISC | 0.070 (0.036) | 0.087 (0.043) | 0.146 (0.060) | 0.247 (0.068) |
| | VMSE | 0.014 (0.025) | **0.045 (0.030)** | 0.028 (0.024) | **0.038 (0.040)** |
| GOWL2-Gau | MISC | **0.064 (0.035)** | 0.123 (0.041) | 0.186 (0.084) | 0.348 (0.057) |
| | VMSE | **0.008 (0.014)** | 0.102 (0.117) | 0.093 (0.104) | 0.167 (0.174) |
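For reference, the two evaluation criteria can be computed as in the following minimal sketch. This is an illustration only, not the paper's code: it assumes the estimated and optimal treatment assignments on a test set are available as arrays, and that the empirical value of each estimated rule and the optimal value have already been obtained; the function and variable names are hypothetical.

```python
import numpy as np

def misclassification_rate(d_hat, d_opt):
    """Fraction of test subjects whose recommended treatment differs
    from the optimal treatment (the MISC criterion)."""
    d_hat = np.asarray(d_hat)
    d_opt = np.asarray(d_opt)
    return float(np.mean(d_hat != d_opt))

def value_mse(values_hat, value_opt):
    """Mean squared difference between the empirical values of the
    estimated rules (one per replication) and the optimal value
    (the VMSE criterion)."""
    values_hat = np.asarray(values_hat, dtype=float)
    return float(np.mean((values_hat - value_opt) ** 2))

# Illustrative usage with made-up numbers (not results from the table):
# d_hat, d_opt are length-n arrays of treatment labels on a test set;
# values_hat collects the value of the estimated rule across replications.
d_hat = np.array([1, 2, 2, 3])
d_opt = np.array([1, 2, 3, 3])
print(misclassification_rate(d_hat, d_opt))   # 0.25
print(value_mse([1.8, 1.9, 2.1], value_opt=2.0))  # mean squared gap to optimum
```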