BMC Genomics. 2019 Apr 23;20:306. doi: 10.1186/s12864-019-5654-9

Table 3. Comparison of different classifiers

| Data set | Classifier | Acc (%) | AUC | F-score (%) | MCC (%) | SP (%) | SE (%) |
|----------|------------|---------|-----|-------------|---------|--------|--------|
| HCCs | RF | 90.48 | 0.9438 | 79.89 | 73.86 | **95.43** | 75.39 |
| | GBDT | 91.69 | 0.9538 | 83.11 | 77.58 | 95.11 | 81.31 |
| | XGBoost | 90.74 | 0.9554 | 82.89 | 76.73 | 91.23 | **88.94** |
| | LightGBM | **92.06** | **0.9616** | **84.66** | **78.97** | 93.73 | 86.84 |
| | FCNN | 90.27 | 0.9402 | 80.16 | 73.77 | 94.24 | 78.14 |
| HepG2 | RF | 82.46 | 0.9027 | 78.98 | 63.92 | 84.94 | **78.93** |
| | GBDT | 81.80 | 0.8990 | 78.34 | 62.63 | 83.92 | 78.80 |
| | XGBoost | 79.42 | 0.9131 | 79.09 | 62.39 | **93.14** | 69.53 |
| | LightGBM | **83.20** | **0.9213** | **81.73** | **67.36** | 89.96 | 78.32 |
| | FCNN | 80.97 | 0.8841 | 76.76 | 60.70 | 84.93 | 75.35 |

1. RF [28] is an ensemble learning model that uses bagging and random feature selection to avoid over-fitting. A minimal illustrative sketch follows.
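
The sketch below (not the paper's actual pipeline; the synthetic data and hyperparameters are placeholders) shows how bagging and per-split feature subsampling appear as options of scikit-learn's RandomForestClassifier:

```python
# Minimal sketch: bagging + random feature selection in a random forest.
# Synthetic data and hyperparameters are illustrative, not the paper's setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

rf = RandomForestClassifier(
    n_estimators=500,     # ensemble of bagged trees
    bootstrap=True,       # each tree trains on a bootstrap sample (bagging)
    max_features="sqrt",  # random subset of features tried at every split
    random_state=0,
)
rf.fit(X_tr, y_tr)
print("RF test accuracy:", rf.score(X_te, y_te))
```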

2. GBDT [60] is a sequential (non-parallel) model in which the gradient from the previous tree serves as the input for the next tree, as sketched below.
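
A minimal sketch with scikit-learn's GradientBoostingClassifier (an assumed stand-in, not the paper's implementation): trees are grown one after another, each fit to the gradient of the loss left by its predecessors.

```python
# Minimal sketch: sequential gradient boosting; each new tree fits the
# gradient (residual) of the loss from the trees before it.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

gbdt = GradientBoostingClassifier(
    n_estimators=300,    # trees are added one at a time, never in parallel
    learning_rate=0.05,  # shrinkage applied to each tree's contribution
    max_depth=3,
    random_state=0,
)
gbdt.fit(X_tr, y_tr)
print("GBDT test accuracy:", gbdt.score(X_te, y_te))
```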

3. XGBoost [53] is an improved GBDT algorithm that completely redefines the gain criterion used when tree leaf nodes split; see the sketch below.
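
To make this concrete, the sketch below (assuming the xgboost Python package; all hyperparameter values are placeholders) exposes the regularization terms that enter XGBoost's redefined split gain:

```python
# Minimal sketch: XGBoost with the regularization terms that enter its
# redefined leaf-split gain. Values are illustrative, not the paper's.
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = xgb.XGBClassifier(
    n_estimators=300,
    learning_rate=0.05,
    gamma=1.0,       # minimum gain a split must achieve to be kept
    reg_lambda=1.0,  # L2 penalty on leaf weights inside the gain formula
)
clf.fit(X_tr, y_tr)
print("XGBoost test accuracy:", clf.score(X_te, y_te))
```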

4. LightGBM [36] builds on the GBDT algorithm and employs sample selection (gradient-based one-side sampling) and feature merging (exclusive feature bundling) to reduce running time; a sketch follows.
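
In the sketch below (assuming the lightgbm Python package; hyperparameters are illustrative), `boosting_type="goss"` switches on gradient-based one-side sampling, while exclusive feature bundling is applied by the library automatically:

```python
# Minimal sketch: LightGBM with gradient-based one-side sampling (the
# "sample selection") enabled; exclusive feature bundling (the "feature
# merging") happens automatically inside the library.
import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = lgb.LGBMClassifier(
    boosting_type="goss",  # keep large-gradient samples, subsample the rest
    n_estimators=300,
    learning_rate=0.05,
    num_leaves=31,
)
clf.fit(X_tr, y_tr)
print("LightGBM test accuracy:", clf.score(X_te, y_te))
```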

5. FCNN denotes a fully connected neural network, sketched below.
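
A minimal fully connected network via scikit-learn's MLPClassifier; the layer sizes here are placeholders, since the paper's architecture is not given in this table:

```python
# Minimal sketch: a fully connected neural network (two hidden layers).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

fcnn = MLPClassifier(
    hidden_layer_sizes=(128, 64),  # two fully connected hidden layers
    activation="relu",
    max_iter=500,
    random_state=0,
)
fcnn.fit(X_tr, y_tr)
print("FCNN test accuracy:", fcnn.score(X_te, y_te))
```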

6. Boldface marks the best value in each column for each data set.
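
For reference, the six column metrics can be computed as in the sketch below, where `clf` stands for any of the fitted classifiers above and `X_te`, `y_te` for the held-out split (names are illustrative, not the paper's code):

```python
# Minimal sketch: computing the table's six metrics for a fitted binary
# classifier `clf` on a held-out split (X_te, y_te).
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             matthews_corrcoef, roc_auc_score)

y_pred = clf.predict(X_te)
y_prob = clf.predict_proba(X_te)[:, 1]      # positive-class score for AUC

tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
print("Acc (%)    ", 100 * accuracy_score(y_te, y_pred))
print("AUC        ", roc_auc_score(y_te, y_prob))
print("F-score (%)", 100 * f1_score(y_te, y_pred))
print("MCC (%)    ", 100 * matthews_corrcoef(y_te, y_pred))
print("SP (%)     ", 100 * tn / (tn + fp))  # specificity = TN / (TN + FP)
print("SE (%)     ", 100 * tp / (tp + fn))  # sensitivity = TP / (TP + FN)
```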