PLoS One. 2017 Jul 14;12(7):e0181173. doi: 10.1371/journal.pone.0181173

Table 4. Comparison of the performance of our models with that of LACE, assuming a 25% intervention rate.

| Model* | # Features | Precision | Recall | AUC | Training time (sec)** | Evaluation time (sec)** |
|---|---|---|---|---|---|---|
| 2-layer neural network | 1667 | 24% | 60% | 0.78 | 2650 | 154 |
| 2-layer neural network | 500 | 22% | 61% | 0.77 | 396 | 31 |
| 2-layer neural network | 100 | 22% | 58% | 0.76 | 169 | 14 |
| Random forest | 100 | 23% | 57% | 0.77 | 669 | 43 |
| Logistic regression | 1667 | 17% | 41% | 0.66 | 60 | 4 |
| Logistic regression | 100 | 21% | 52% | 0.72 | 17 | 0.1 |
| LACE | 4 | 21% | 49% | 0.72*** | 0 | 0.2 |
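The precision and recall columns above are computed at a fixed 25% intervention rate, i.e. the top quarter of patients by predicted risk are flagged for intervention. A minimal sketch of that metric is below; the function name `precision_recall_at_rate` and the toy data are our own illustration, not from the paper:

```python
import numpy as np

def precision_recall_at_rate(y_true, y_score, rate=0.25):
    """Precision/recall when the top `rate` fraction of patients by risk is flagged."""
    n_flagged = int(round(rate * len(y_score)))
    # indices of the highest-risk patients, in descending score order
    flagged = np.argsort(y_score)[::-1][:n_flagged]
    tp = y_true[flagged].sum()        # flagged patients who were actually readmitted
    precision = tp / n_flagged        # fraction of flagged patients readmitted
    recall = tp / y_true.sum()        # fraction of readmissions that were flagged
    return precision, recall

# toy example: 8 patients, so 2 are flagged at a 25% rate
y_true = np.array([0, 1, 0, 0, 1, 0, 0, 1])
y_score = np.array([0.1, 0.9, 0.2, 0.3, 0.8, 0.1, 0.4, 0.2])
p, r = precision_recall_at_rate(y_true, y_score)
```

At a fixed intervention rate the model's threshold is implicit: it is whatever score cutoff flags exactly 25% of patients, which makes the comparison across models in the table fair regardless of score calibration.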

*—Model parameters: neural network (as described in Methods section), random forest (1000 trees of max depth 8, with 30% of features in each tree), logistic regression (default parameters in scikit-learn package)
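The footnote's hyperparameters map naturally onto scikit-learn estimators. The sketch below assumes "max depth 8" and "30% of features in each tree" correspond to the `max_depth` and `max_features` constructor arguments; the paper does not give its exact constructor calls:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Random forest: 1000 trees of max depth 8, 30% of features per tree (assumed
# to be the max_features fraction used at each split)
rf = RandomForestClassifier(n_estimators=1000, max_depth=8, max_features=0.3)

# Logistic regression: default parameters, as stated in the footnote
lr = LogisticRegression()
```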

**—Per-fold training time was measured on a 2014 MacBook Pro with a 4-core 2.2 GHz processor and 16GB RAM. The neural network model ran on four cores, while the other models could only be run on a single core. Training was performed on 259,050 records and evaluation was performed on 64,763 records.

***—We computed the AUC for LACE by sweeping the classification threshold over every possible LACE score. However, LACE is normally used with a single fixed threshold, so the given AUC overstates the performance of LACE in practice.
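The threshold sweep described in this footnote is equivalent to the standard ROC AUC computation over a discrete score. A minimal sketch with made-up scores and labels (not the paper's data), using scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical LACE scores (integers) and 30-day readmission labels
scores = np.array([3, 11, 7, 14, 2, 6])
labels = np.array([0, 1, 0, 1, 0, 1])

# roc_auc_score implicitly sweeps every possible threshold over the score,
# which is the procedure described for the LACE AUC above
auc = roc_auc_score(labels, scores)
```

Because LACE takes only a handful of integer values, its ROC curve has few distinct operating points; the reported AUC picks the best of them per threshold, while a deployed LACE model is pinned to one.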