| Method | Description | Reference |
| --- | --- | --- |
| Nearest neighbor | An ensemble of 83 IBk classifiers in which instances are encoded by sequence properties. | Hu et al. [58] |
| | The IBk classifier is trained on the training set to find several well-performing random projections, which are then applied to the test set (see the nearest-neighbor sketch after the table). | Jiang et al. [16] |
| Support vector machine | A decision tree performs feature selection, and an SVM builds the predictive model. | Cho et al. [30] |
| | The F-score removes redundant and irrelevant features, and an SVM trains the model (see the F-score + SVM sketch after the table). | Xia et al. [28] |
| | Proposed two new KFC models trained with SVMs. | Darnell et al. [31] |
| | A two-step feature selection method selects 38 optimal features; an SVM then builds the prediction model. | Deng et al. [11] |
| | A random forest selects the 58 optimal features; an SVM then trains the model (see the random-forest selection sketch after the table). | Ye et al. [59] |
| | A two-step selection method picks the two best features; an SVM then builds the classifier. | Xia et al. [3] |
| | The method remains effective even when the interface region is unknown. | Qian et al. [48] |
| Decision trees | A combination of two decision-tree models, K-FADE and K-CON. | Darnell et al. [31] |
| Bayesian networks | Can handle partially missing protein data as well as unreliable conditions. | Assi et al. [65] |
| Neural networks | Does not require knowledge of the interacting partner. | Ofran and Rost [66] |
| Ensemble learning | mRMR selects features, SMOTE handles the imbalanced data, and AdaBoost makes the final prediction (see the SMOTE + AdaBoost sketch after the table). | Huang and Zhang [72] |
| | Random forest (RF) is used to effectively integrate hybrid features. | Wang et al. [71] |
| | Bootstrap resampling and decision-fusion techniques train and combine sub-classifiers (see the bootstrap-fusion sketch after the table). | Deng et al. [11] |
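To make the nearest-neighbor row from Jiang et al. [16] concrete, the following is a minimal sketch: a k-nearest-neighbor classifier (scikit-learn's `KNeighborsClassifier`, the closest analogue of Weka's IBk) is evaluated on several random projections of the training data, the best-scoring projection is kept, and that projection is then applied to the test set. The dataset, projection dimension, value of k, and number of candidate projections are all illustrative assumptions, not values from the paper.

```python
# Sketch: pick a well-performing random projection on the training data,
# then apply it (with k-NN) to the test set. All sizes are assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.random_projection import GaussianRandomProjection

X, y = make_classification(n_samples=400, n_features=100, n_informative=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

candidates = []
for seed in range(10):  # try 10 random projections (assumed count)
    pipe = make_pipeline(
        GaussianRandomProjection(n_components=20, random_state=seed),
        KNeighborsClassifier(n_neighbors=5),  # IBk analogue
    )
    score = cross_val_score(pipe, X_tr, y_tr, cv=5).mean()
    candidates.append((score, pipe))

# Keep the projection that scored best on the training data,
# then apply it to the held-out test set.
best_score, best_pipe = max(candidates, key=lambda t: t[0])
best_pipe.fit(X_tr, y_tr)
print(f"CV score {best_score:.3f}, test accuracy {best_pipe.score(X_te, y_te):.3f}")
```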
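The F-score + SVM pattern of Xia et al. [28] can be sketched as a two-stage pipeline. Here scikit-learn's ANOVA F-value (`f_classif`) stands in for the paper's F-score criterion, which may be formulated differently, and the number of retained features (k=20) is an assumption.

```python
# Sketch: filter features by F-score, then train an SVM on the survivors.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=60, n_informative=10, random_state=0)

pipe = Pipeline([
    ("fscore", SelectKBest(f_classif, k=20)),  # drop redundant/irrelevant features
    ("svm", SVC(kernel="rbf", C=1.0, gamma="scale")),
])
print(cross_val_score(pipe, X, y, cv=5).mean())
```

The same pipeline shape covers the other selection-then-SVM rows (Cho et al. [30], Deng et al. [11], Xia et al. [3]): only the selector in the first stage changes.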
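For the random-forest selection row (Ye et al. [59]), the sketch below ranks features by random-forest importance, keeps exactly the top 58 (the count quoted in the table), and trains an SVM on that subset. Forest size and the synthetic data are assumptions.

```python
# Sketch: rank features by random-forest importance, keep the top 58,
# then train an SVM on the selected subset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=120, n_informative=20, random_state=0)

pipe = Pipeline([
    ("rf_select", SelectFromModel(
        RandomForestClassifier(n_estimators=200, random_state=0),
        max_features=58,
        threshold=-np.inf,  # disable the threshold so exactly 58 features are kept
    )),
    ("svm", SVC(kernel="rbf")),
])
print(cross_val_score(pipe, X, y, cv=5).mean())
```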
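The ensemble-learning row from Huang and Zhang [72] describes a three-stage pipeline: mRMR feature selection, SMOTE oversampling, AdaBoost prediction. mRMR is not part of scikit-learn, so the sketch substitutes `mutual_info_classif` as a rough stand-in (it scores relevance but, unlike mRMR, does not penalize redundancy between features); SMOTE comes from the third-party imbalanced-learn package, and k=15 is an assumption.

```python
# Sketch: feature selection, SMOTE oversampling, AdaBoost prediction.
# Requires the third-party imbalanced-learn package (pip install imbalanced-learn).
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline  # applies SMOTE to training folds only
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score

# Imbalanced toy data: roughly 90% negatives, 10% positives.
X, y = make_classification(n_samples=400, n_features=50, weights=[0.9, 0.1], random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(mutual_info_classif, k=15)),  # mRMR stand-in
    ("smote", SMOTE(random_state=0)),                    # rebalance the minority class
    ("ada", AdaBoostClassifier(n_estimators=100, random_state=0)),
])
print(cross_val_score(pipe, X, y, cv=5, scoring="f1").mean())
```

Using the imbalanced-learn `Pipeline` matters here: it ensures SMOTE resamples only the training folds during cross-validation, so synthetic points never leak into evaluation.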
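Finally, the bootstrap-resampling and decision-fusion row (Deng et al. [11]) can be sketched as training sub-classifiers on bootstrap resamples and fusing their outputs by majority vote. The base learner, ensemble size, and the simple voting rule are assumptions; the paper's fusion technique may be more elaborate.

```python
# Sketch: train sub-classifiers on bootstrap resamples and fuse their
# decisions by majority vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
sub_models = []
for _ in range(25):  # 25 bootstrap rounds (assumed count)
    idx = rng.integers(0, len(X_tr), size=len(X_tr))  # sample with replacement
    clf = DecisionTreeClassifier(random_state=0).fit(X_tr[idx], y_tr[idx])
    sub_models.append(clf)

# Decision fusion: majority vote over the sub-classifiers' binary predictions.
votes = np.stack([m.predict(X_te) for m in sub_models])
fused = (votes.mean(axis=0) >= 0.5).astype(int)
print("fused accuracy:", (fused == y_te).mean())
```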