Table 2.
Performance of different models with bootstrapping.
| Model | Internal, RMSEᵃ (95% CI) | External, RMSE (95% CI) | Internal, MAEᵇ (95% CI) | External, MAE (95% CI) | Internal, R² (95% CI) | External, R² (95% CI) |
| PPKᶜ | 10.38 (7.38 to 13.42) | 20.43 (18.15 to 22.64) | 6.68 (5.29 to 8.45) | 11.87 (11.05 to 12.75) | −0.02 (−0.44 to 0.22) | −3.64 (−5.16 to −2.48) |
| XGBoostᵈ | 9.58 (6.31 to 12.60) | 11.59 (10.88 to 12.17) | 5.75 (4.37 to 7.48) | 8.75 (8.34 to 9.13) | 0.13 (−0.63 to 0.48) | −0.49 (−0.81 to −0.24) |
| TabNet | 8.81 (6.33 to 11.29) | 13.89 (11.01 to 17.71) | 5.85 (4.53 to 7.25) | 7.50 (6.90 to 8.13) | 0.26 (−0.15 to 0.51) | −1.15 (−2.53 to −0.38) |
| 300-layer MLPᵉ | 10.17 (7.06 to 13.09) | 9.94 (8.84 to 11.04) | 6.98 (5.61 to 8.63) | 7.45 (6.55 to 7.28) | 0.021 (−0.086 to 0.056) | −0.098 (−0.26 to −0.023) |
| JointMLPᶠ (proposed) | 8.27 (5.33 to 11.19) | 9.50 (8.72 to 10.30) | 5.11 (3.92 to 6.58) | 6.56 (6.18 to 6.90) | 0.35 (−0.03 to 0.59) | −0.005 (−0.17 to 0.13) |
ᵃRMSE: root mean squared error.
ᵇMAE: mean absolute error.
ᶜPPK: population pharmacokinetic.
ᵈXGBoost: extreme gradient boosting.
ᵉMLP: multilayer perceptron.
ᶠJointMLP: joint multilayer perceptron.
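For reference, the bootstrapped metrics above can be reproduced from held-out predictions with a percentile bootstrap: resample cases with replacement, recompute RMSE, MAE, and R² on each replicate, and take the 2.5th and 97.5th percentiles. The sketch below is a hypothetical illustration, not the paper's exact procedure (the resampling unit, replicate count, and CI method are not specified here); note that R² can be negative, as in the table, when a model fits worse than predicting the mean.

```python
import numpy as np

def bootstrap_metrics(y_true, y_pred, n_boot=1000, seed=0):
    """Percentile bootstrap 95% CIs for RMSE, MAE, and R².

    Hypothetical helper for illustration; returns
    {metric: (point_estimate, ci_lower, ci_upper)}.
    """
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    n = len(y_true)

    def metrics(t, p):
        err = t - p
        rmse = np.sqrt(np.mean(err ** 2))
        mae = np.mean(np.abs(err))
        ss_res = np.sum(err ** 2)
        ss_tot = np.sum((t - t.mean()) ** 2)
        # R² is negative when the model is worse than predicting the mean
        r2 = 1.0 - ss_res / ss_tot
        return rmse, mae, r2

    point = metrics(y_true, y_pred)
    reps = np.empty((n_boot, 3))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample cases with replacement
        reps[b] = metrics(y_true[idx], y_pred[idx])

    lo, hi = np.percentile(reps, [2.5, 97.5], axis=0)
    return {name: (pt, l, h)
            for name, pt, l, h in zip(("RMSE", "MAE", "R2"), point, lo, hi)}
```

RMSE is never smaller than MAE for the same predictions, so that ordering (visible in every row of the table) is a quick sanity check on any implementation.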