2018 May 25;8:173. doi: 10.3389/fcimb.2018.00173

Table 1.

Values of the AUROC metric for the best eight individual models and the best 8-model ensemble.

| Model | Training set | Test set | DUD-E database |
|---|---|---|---|
| 8-MODEL ENSEMBLE (MIN) | 0.851 (±0.0281) | 0.885 (±0.0367) | 0.976*** (±0.0085) |
| 8-MODEL ENSEMBLE (RANKING) | 0.886 (±0.0239) | 0.878 (±0.0375) | 0.976*** (±0.0082) |
| 8-MODEL ENSEMBLE (AVERAGE) | 0.891 (±0.0234) | 0.887 (±0.0357) | 0.970*** (±0.0096) |
| 8-MODEL ENSEMBLE (VOTING) | 0.833 (±0.0283) | 0.810 (±0.0516) | 0.959* (±0.0173) |
| 348 | 0.882 (±0.0250) | 0.885 (±0.0360) | 0.934 (±0.0123) |
| 706 | 0.809* (±0.0319) | 0.736** (±0.0565) | 0.924 (±0.0205) |
| 981 | 0.843 (±0.0298) | 0.837 (±0.0467) | 0.922 (±0.0254) |
| 557 | 0.778** (±0.0343) | 0.818 (±0.0482) | 0.919 (±0.0203) |
| 123 | 0.850 (±0.0285) | 0.882 (±0.0382) | 0.918 (±0.0185) |
| 693 | 0.860 (±0.0280) | 0.828* (±0.0459) | 0.913* (±0.0171) |
| 560 | 0.775** (±0.0348) | 0.779* (±0.0525) | 0.911* (±0.0185) |
| 746 | 0.844 (±0.0288) | 0.820 (±0.0456) | 0.910 (±0.0195) |

Asterisks mark AUROCs that differ significantly from the corresponding value in the same column for the best individual model (model 348).

*p < 0.05, **p < 0.01, ***p < 0.001.

Model 348 achieved the highest AUROC among the individual models.
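The four ensemble rows in Table 1 correspond to four ways of combining the per-molecule scores of the eight individual models. As a rough sketch of how such combination rules can be computed and evaluated with AUROC (the data, the models, and the exact voting threshold here are illustrative assumptions, not the authors' pipeline):

```python
# Hedged sketch of four score-combination rules (min, ranking, average, voting)
# evaluated with AUROC. Labels and scores are simulated, not from the study.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 2, size=n)           # 1 = active, 0 = inactive
# Simulated scores from 8 individual models (rows = models, cols = molecules),
# with a weak bias toward actives so the AUROCs are above chance.
scores = rng.random((8, n)) + 0.5 * labels

def mean_rank(s):
    # Within each model, convert scores to ranks (higher score -> higher rank),
    # then average the ranks across models.
    ranks = np.argsort(np.argsort(s, axis=1), axis=1)
    return ranks.mean(axis=0)

ensembles = {
    # Lowest score any model assigns to the molecule.
    "min": scores.min(axis=0),
    # Average rank across models.
    "ranking": mean_rank(scores),
    # Plain average of the raw scores.
    "average": scores.mean(axis=0),
    # Fraction of models scoring the molecule above that model's own median
    # (an assumed vote criterion for illustration).
    "voting": (scores > np.median(scores, axis=1, keepdims=True)).mean(axis=0),
}

for name, s in ensembles.items():
    print(f"{name:8s} AUROC = {roc_auc_score(labels, s):.3f}")
```

With real models, the standard errors in Table 1 would come from resampling (e.g. bootstrapping the molecule set) rather than a single AUROC computation.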