Table 1. Performance statistics of the individual and consensus classification models.
Model | CCR | κ | SE | SP | Coverage |
---|---|---|---|---|---|
Morgan–RF | 0.85 | 0.71 | 0.85 | 0.86 | 0.62 |
MACCS–RF | 0.83 | 0.66 | 0.83 | 0.83 | 0.67 |
AtomPair–SVM | 0.81 | 0.62 | 0.81 | 0.81 | 0.65 |
AtomPair–GBM | 0.81 | 0.62 | 0.81 | 0.81 | 0.65 |
Dragon–SVM | 0.85 | 0.70 | 0.85 | 0.84 | 0.69 |
Dragon–GBM | 0.85 | 0.70 | 0.85 | 0.84 | 0.69 |
CDK–SVM | 0.84 | 0.69 | 0.85 | 0.84 | 0.77 |
Consensus | 0.87 | 0.74 | 0.87 | 0.88 | 1.00 |
Consensus rigor | 0.91 | 0.81 | 0.96 | 0.87 | 0.38 |
RF, random forest; SVM, support vector machine; GBM, gradient boosting machine; CCR, correct classification rate; κ, Cohen’s kappa coefficient; SE, sensitivity; SP, specificity. The consensus and consensus rigor models were built by averaging the predicted values from the individual models built with each machine learning technique (Morgan–RF, MACCS–RF, AtomPair–SVM, Dragon–SVM, and CDK–SVM).
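The footnote describes the consensus as an average of the individual models' predicted values; the table itself does not specify how the stricter "consensus rigor" variant restricts its coverage. The sketch below is a minimal, assumption-laden illustration of that averaging: `consensus_predict` and its `rigor_agreement` parameter (which drops compounds lacking sufficient inter-model agreement) are hypothetical names standing in for whatever scheme the authors actually used.

```python
import numpy as np

def consensus_predict(prob_matrix, threshold=0.5, rigor_agreement=None):
    """Average per-model predicted probabilities into a consensus call.

    prob_matrix : shape (n_models, n_compounds); each model's predicted
        probability of the active class.
    rigor_agreement : if given (e.g. 1.0), a compound is only kept when at
        least that fraction of models agrees with the consensus class;
        the rest fall outside the consensus coverage (assumed behavior).
    """
    prob_matrix = np.asarray(prob_matrix, dtype=float)
    mean_prob = prob_matrix.mean(axis=0)               # consensus score per compound
    consensus_class = (mean_prob >= threshold).astype(int)

    if rigor_agreement is None:
        covered = np.ones_like(consensus_class, dtype=bool)
    else:
        per_model_class = (prob_matrix >= threshold).astype(int)
        agreement = (per_model_class == consensus_class).mean(axis=0)
        covered = agreement >= rigor_agreement         # flag low-agreement compounds

    return consensus_class, covered

# Example: five individual models (e.g. Morgan-RF, MACCS-RF, AtomPair-SVM,
# Dragon-SVM, CDK-SVM) scoring three compounds.
probs = [[0.9, 0.4, 0.6],
         [0.8, 0.3, 0.4],
         [0.7, 0.2, 0.6],
         [0.9, 0.6, 0.5],
         [0.8, 0.1, 0.7]]
labels, covered = consensus_predict(probs, rigor_agreement=1.0)
print(labels, covered)   # coverage drops below 1.0 once unanimity is required
```

Under these assumptions, the plain consensus covers every compound (coverage 1.00 in the table), while the rigor variant trades coverage (0.38) for the higher CCR and sensitivity by predicting only where the individual models agree.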