
Table 3. Models and tuning parameter values used for the binary outcome data.

| Model | Parameter 1 | Parameter 2 | Parameter 3 | Parameter 4 |
| --- | --- | --- | --- | --- |
| Linear logistic regression [57] | - | - | - | - |
| Linear discriminant analysis [58] | - | - | - | - |
| L1-logistic regression [55, 59, 60] | λ1 = 0.00001, 0.0001, 0.001, 0.01, 0.1, 0.5, 1, 10, 20, 30, 50, 70, 100, 500, 1,000 | - | - | - |
| L2-logistic regression [55, 61, 62] | λ2 = 0.00001, 0.0001, 0.001, 0.01, 0.1, 0.5, 1, 10, 20, 30, 50, 70, 100, 500, 1,000 | - | - | - |
| Penalized discriminant analysis [63] | λ = 0.00001, 0.0001, 0.001, 0.01, 0.1, 0.5, 1, 10, 20, 30, 50, 70, 100, 500, 1,000 | - | - | - |
| Random forest [33] | Ntrees = 1,000 | Npredictors = 2, 3, 4, 5, 6, 7, 9, 11, 13, 16 | Node size = 1 | - |
| Stochastic gradient boosting [34] | Max. Ntrees = 1,000 | Interaction depth = 2, 3, 4, 5, 6, 7, 8, 9 | Bag fraction = 0.5 | ν = 0.01 |
| BART [64, 65] | Ntrees = 200 | k = 2.0 | Niter = 1,000 | Number of burn-in iterations = 100 |

All models were fitted with both standardized and unstandardized input data.
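
To make the tuning grids concrete, the following is a minimal sketch, assuming a scikit-learn workflow, of how two of the Table 3 grids (L1-logistic regression and random forest) could be expressed. The synthetic data, 5-fold cross-validation, AUC scoring, and the C = 1/λ mapping are illustrative assumptions, not the authors' procedure or software.

```python
# Illustrative sketch (not the authors' code): a subset of the Table 3 tuning
# grids expressed with scikit-learn. Data, CV scheme, and scoring are assumed.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Penalty grid shared by the L1- and L2-logistic regressions in Table 3.
lambdas = [1e-5, 1e-4, 1e-3, 1e-2, 0.1, 0.5, 1, 10, 20, 30, 50, 70, 100, 500, 1000]

# scikit-learn parameterizes regularization strength as C = 1 / lambda
# (an approximation here; it ignores glmnet-style scaling by sample size).
l1_grid = {"clf__C": [1.0 / l for l in lambdas]}

# Synthetic binary-outcome data purely for illustration.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Standardization was optional in the study (both variants were fitted);
# placing it in the pipeline refits the scaler within each CV fold.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(penalty="l1", solver="liblinear", max_iter=5000)),
])
l1_search = GridSearchCV(pipe, l1_grid, cv=5, scoring="roc_auc").fit(X, y)

# Random forest: 1,000 trees, node size 1, and a grid over the number of
# candidate predictors per split (Npredictors in Table 3, mtry in R terms).
rf_grid = {"max_features": [2, 3, 4, 5, 6, 7, 9, 11, 13, 16]}
rf = RandomForestClassifier(n_estimators=1000, min_samples_leaf=1, random_state=0)
rf_search = GridSearchCV(rf, rf_grid, cv=5, scoring="roc_auc").fit(X, y)

print(l1_search.best_params_, rf_search.best_params_)
```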