
TABLE 1.

Simulation results for Model I with the constant propensity score.

Homogeneous Error

                  Normal                     Log-Normal                  Cauchy
n    method   MSE           PCD   δ0.5   MSE           PCD   δ0.5   MSE           PCD   δ0.5
100  LS       1.32 (0.040)  80.7  1.06   2.36 (0.081)  75.7  1.57   -             58.4  3.75
     p(0.5)   1.44 (0.042)  80.1  1.13   1.73 (0.051)  78.0  1.31   2.69 (0.077)  75.2  1.63
     p(0.25)  1.90 (0.057)  78.3  1.34   1.63 (0.051)  79.0  1.29   5.29 (0.168)  70.4  2.25
     Huber    1.15 (0.034)  81.9  0.93   1.45 (0.044)  79.9  1.13   2.61 (0.072)  74.9  1.66
200  LS       0.68 (0.021)  85.6  0.59   1.10 (0.033)  82.0  0.91   -             58.7  3.70
     p(0.5)   0.73 (0.021)  85.3  0.62   0.78 (0.021)  84.1  0.70   1.23 (0.037)  81.3  0.99
     p(0.25)  0.92 (0.028)  84.0  0.75   0.70 (0.023)  86.0  0.59   2.48 (0.079)  75.7  1.64
     Huber    0.58 (0.017)  86.8  0.50   0.66 (0.018)  85.5  0.58   1.24 (0.035)  80.8  1.03
400  LS       0.33 (0.009)  90.3  0.26   0.56 (0.016)  87.1  0.46   -             59.2  3.61
     p(0.5)   0.35 (0.010)  90.0  0.29   0.37 (0.010)  89.0  0.34   0.56 (0.016)  87.1  0.48
     p(0.25)  0.43 (0.013)  89.1  0.34   0.33 (0.010)  90.7  0.25   1.16 (0.037)  82.9  0.86
     Huber    0.28 (0.008)  91.1  0.22   0.31 (0.009)  90.2  0.27   0.58 (0.017)  86.7  0.49
800  LS       0.17 (0.005)  93.2  0.13   0.26 (0.008)  90.9  0.23   -             59.4  3.59
     p(0.5)   0.17 (0.005)  93.1  0.13   0.19 (0.005)  92.1  0.17   0.29 (0.009)  90.7  0.24
     p(0.25)  0.22 (0.007)  92.4  0.16   0.18 (0.006)  93.6  0.12   0.59 (0.019)  87.3  0.48
     Huber    0.14 (0.004)  93.8  0.11   0.16 (0.005)  93.1  0.14   0.29 (0.008)  90.5  0.25

Heterogeneous Error

                  Normal                     Log-Normal                  Cauchy
n    method   MSE           PCD   δ0.5   MSE           PCD   δ0.5   MSE           PCD   δ0.5
100  LS       3.24 (0.110)  74.7  1.70   8.98 (0.561)  68.6  2.44   -             56.2  4.05
     p(0.5)   1.70 (0.060)  80.5  1.08   1.80 (0.064)  80.1  1.08   3.45 (0.124)  75.1  1.69
     p(0.25)  2.50 (0.085)  77.4  1.42   2.51 (0.079)  76.8  1.46   9.13 (0.341)  67.2  2.66
     Huber    1.70 (0.057)  80.4  1.10   1.87 (0.063)  79.2  1.16   4.27 (0.155)  72.8  1.93
200  LS       1.54 (0.050)  80.6  1.06   4.71 (0.244)  73.4  1.85   -             55.2  4.17
     p(0.5)   0.78 (0.028)  86.7  0.53   0.90 (0.032)  85.3  0.63   1.49 (0.052)  81.9  0.95
     p(0.25)  1.16 (0.039)  83.5  0.81   1.23 (0.039)  82.0  0.91   3.95 (0.150)  73.2  1.90
     Huber    0.77 (0.025)  86.4  0.55   0.94 (0.032)  84.5  0.69   1.94 (0.071)  79.3  1.19
400  LS       0.80 (0.026)  86.0  0.58   2.69 (0.136)  77.8  1.34   -             54.7  4.26
     p(0.5)   0.39 (0.013)  90.5  0.27   0.44 (0.017)  89.6  0.32   0.71 (0.024)  86.9  0.50
     p(0.25)  0.56 (0.019)  88.8  0.37   0.66 (0.020)  86.9  0.50   1.70 (0.055)  79.6  1.17
     Huber    0.38 (0.012)  90.4  0.27   0.48 (0.017)  88.8  0.36   0.91 (0.029)  84.9  0.65
800  LS       0.41 (0.013)  89.9  0.29   1.35 (0.150)  83.1  0.82   -             56.5  4.00
     p(0.5)   0.18 (0.006)  93.6  0.12   0.20 (0.007)  92.6  0.16   0.36 (0.013)  91.0  0.25
     p(0.25)  0.28 (0.009)  92.2  0.18   0.31 (0.010)  90.8  0.24   0.89 (0.031)  85.8  0.60
     Huber    0.19 (0.006)  93.3  0.13   0.22 (0.007)  92.1  0.18   0.47 (0.017)  89.2  0.34

LS stands for lsA-learning. p(0.5) stands for robust regression with the pinball loss and parameter τ = 0.5. p(0.25) stands for robust regression with the pinball loss and parameter τ = 0.25. Huber stands for robust regression with the Huber loss, where the parameter α is tuned automatically with the R function rlm. The δ0.5 columns are multiplied by 10.
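For readers who want to try the robust fits compared above, the snippet below is a minimal sketch, not the authors' simulation code: it assumes a hypothetical data frame dat with outcome y and covariates x1 and x2, and uses the standard R routines MASS::rlm for the Huber loss and quantreg::rq for the pinball loss with τ = 0.5 and τ = 0.25. The propensity-score weighting and A-learning contrast used in the paper are omitted here.

# Sketch only (assumed data frame `dat` with columns y, x1, x2; not the authors' code)
library(MASS)      # rlm: robust regression; Huber psi function by default
library(quantreg)  # rq: quantile regression, i.e. the pinball (check) loss

# Huber loss; rlm's default Huber tuning constant is k = 1.345
fit_huber <- rlm(y ~ x1 + x2, data = dat)

# Pinball loss with tau = 0.5 (median regression) and tau = 0.25
fit_p50 <- rq(y ~ x1 + x2, tau = 0.50, data = dat)
fit_p25 <- rq(y ~ x1 + x2, tau = 0.25, data = dat)

summary(fit_huber)
summary(fit_p50)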