Int J Mol Sci. 2022 Apr 12;23(8):4263. doi: 10.3390/ijms23084263

Table 2.

Performance comparison of various deep learning models on the P.ELM training dataset; 10-fold cross-validation was used.

Methods      AUC (%)  Acc (%)  Sn (%)   Sp (%)   Pre (%)  F1 (%)   MCC

Residue = S
TransPhos    78.67*   71.53*   67.16*   75.89    73.59*   70.23*   0.432*
CNN          74.34    68.40    61.14    75.65    71.52    65.93    0.372
LSTM         77.04    70.48    65.01    75.95    72.99    68.77    0.412
RNN          75.53    68.84    61.44    76.24    72.11    66.35    0.381
FCNN         75.30    69.14    60.68    77.61*   73.04    66.29    0.388

Residue = T
TransPhos    67.19*   61.77*   47.32    76.22    66.56    55.32    0.246*
CNN          64.44    59.19    42.03    76.34    63.98    50.74    0.196
LSTM         66.59    60.64    41.85    79.43*   67.05*   51.54    0.230
RNN          66.03    61.21    48.57*   73.84    65.00    55.60*   0.232
FCNN         63.94    59.63    45.30    73.96    63.50    52.88    0.201

Residue = Y
TransPhos    60.09    55.41    38.52    72.30    58.17    46.35    0.115
CNN          59.11    54.59    34.81    74.37*   57.60    43.40    0.100
LSTM         59.49    55.56    40.74    70.37    57.89    47.83    0.116
RNN          61.71*   59.48*   58.96*   60.00    59.58*   59.27*   0.190*
FCNN         59.30    56.44    43.26    69.63    58.75    49.83    0.134

Accuracy (Acc), Sensitivity (Sn), Specificity (Sp), Precision (Pre), F1 score (F1), and Matthews correlation coefficient (MCC) were calculated to measure the performance of the models; all metrics except MCC are reported as percentages. An asterisk (*) marks the best-performing model for that evaluation metric.
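For reference, the table's metrics follow the standard definitions over a binary confusion matrix (TP, FP, TN, FN). The sketch below uses those standard formulas; it is an illustration only, not the paper's implementation, and the example counts are made up.

```python
import math

def metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from raw confusion-matrix counts."""
    acc = (tp + tn) / (tp + fp + tn + fn)   # Accuracy
    sn = tp / (tp + fn)                     # Sensitivity (recall)
    sp = tn / (tn + fp)                     # Specificity
    pre = tp / (tp + fp)                    # Precision
    f1 = 2 * pre * sn / (pre + sn)          # F1 score
    # Matthews correlation coefficient, in [-1, 1]
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"Acc": acc, "Sn": sn, "Sp": sp, "Pre": pre, "F1": f1, "MCC": mcc}

# Hypothetical counts, not taken from the paper:
print(metrics(50, 10, 30, 10))
```

Note that Acc, Sn, Sp, Pre, and F1 are fractions here; multiply by 100 to match the percentage scale used in Table 2, while MCC is reported on its native -1 to 1 scale.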