Front Neurol. 2020 May 11;11:364. doi: 10.3389/fneur.2020.00364

Table 3. Performance summary.

| Model | Train AUC (95% CI) | Train SN | Train SP | Opt. Thr | Test (CV) AUC (95% CI) | Test ACC | Test Kappa | Test SN | Test SP |
|---|---|---|---|---|---|---|---|---|---|
| EARLY PD VS. HC | | | | | | | | | |
| GLM | 0.920 (0.888-0.953) | 0.912 | 0.812 | 0.462 | 0.907 (0.849-0.964) | 0.898 | 0.764 | 0.909 | 0.872 |
| **GAM** | 0.946 (0.922-0.970) | 0.923 | 0.850 | 0.534 | 0.928 (0.878-0.978) | 0.898 | 0.768 | 0.898 | 0.897 |
| Tree^a | 0.872 (0.831-0.913) | 0.857 | 0.879 | 0.586 | 0.860 (0.799-0.922) | 0.842 | 0.659 | 0.818 | 0.897 |
| RF^a | 0.999 (0.999-1.00) | 0.990 | 1.00 | 0.534 | 0.913 (0.858-0.968) | 0.898 | 0.764 | 0.909 | 0.872 |
| XGB^a | 0.958 (0.937-0.979) | 0.898 | 0.901 | 0.660 | 0.923 (0.875-0.972) | 0.882 | 0.736 | 0.875 | 0.897 |
| EARLY PD VS. SWEDD | | | | | | | | | |
| GLM^b | 0.938 (0.863-0.972) | 0.909 | 0.841 | 0.504 | 0.779 (0.677-0.880) | 0.744 | 0.265 | 0.667 | 0.755 |
| GAM^b | 0.955 (0.916-0.994) | 0.886 | 0.909 | 0.437 | 0.787 (0.689-0.886) | 0.756 | 0.299 | 0.714 | 0.762 |
| Tree^a,b | 0.932 (0.894-0.971) | 0.864 | 0.920 | 0.486 | 0.743 (0.617-0.869) | 0.798 | 0.343 | 0.667 | 0.816 |
| RF^a,b | 1.00 (1.00-1.00) | 1.00 | 1.00 | 0.461 | 0.822 (0.746-0.899) | 0.732 | 0.302 | 0.809 | 0.721 |
| **XGB^a,b** | 0.997 (0.993-1.00) | 0.977 | 0.954 | 0.542 | 0.863 (0.777-0.948) | 0.768 | 0.381 | 0.905 | 0.748 |

Superscript a: 10-fold cross-validation with 5 repeats for resampling the model tuning parameter(s); the optimal hyper-parameter setting was selected by AUC. Superscript b: synthetic minority oversampling technique (SMOTE) applied to correct class imbalance. Bold model names indicate the highest cross-validated AUC.

ACC, accuracy; AUC, area under the receiver operating characteristic curve; CI, DeLong confidence interval; GAM, generalized additive model; GLM, generalized linear model (logistic regression); HC, healthy controls; Kappa, Cohen's kappa; Opt. Thr, optimal threshold; PD, Parkinson's disease; RF, random forest; SN, sensitivity; SP, specificity; SWEDD, scans without evidence of dopaminergic deficit; Tree, decision tree; XGB, extreme gradient boosting (XGBoost).
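The footnotes describe the resampling and oversampling scheme but not its implementation. The following is a minimal Python sketch of that scheme under stated assumptions: scikit-learn, imbalanced-learn, and xgboost stand in for whatever software the authors actually used; the data, hyper-parameter grid, and seeds are placeholders rather than the paper's PPMI features; and the Youden criterion for the optimal threshold is an illustrative assumption, since the table does not state how Opt. Thr was chosen.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_curve
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from xgboost import XGBClassifier

# Placeholder data standing in for the study's features (assumption).
X, y = make_classification(n_samples=300, n_features=20, weights=[0.7, 0.3],
                           random_state=0)

# Footnote b: SMOTE inside the pipeline, so oversampling is refit on the
# training portion of every fold and never touches held-out data.
pipe = Pipeline([
    ("smote", SMOTE(random_state=0)),
    ("xgb", XGBClassifier(eval_metric="logloss", random_state=0)),
])

# Footnote a: 10-fold, 5-repeat resampling; the best setting maximizes AUC.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
grid = {"xgb__max_depth": [2, 3, 4],          # illustrative grid, not the paper's
        "xgb__n_estimators": [100, 300]}
search = GridSearchCV(pipe, grid, scoring="roc_auc", cv=cv).fit(X, y)
print(f"Cross-validated AUC: {search.best_score_:.3f}")

# Opt. Thr column: one common choice (an assumption here, not stated in the
# table) is the Youden-optimal cutoff on the training ROC curve.
fpr, tpr, thr = roc_curve(y, search.best_estimator_.predict_proba(X)[:, 1])
print(f"Youden-optimal threshold: {thr[np.argmax(tpr - fpr)]:.3f}")
```

The same pattern would cover the other rows: swap the final pipeline step for LogisticRegression (GLM), DecisionTreeClassifier (Tree), or RandomForestClassifier (RF), use a separate library such as pyGAM for the GAM, and drop the SMOTE step for rows without superscript b.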