Int J Mol Sci. 2022 Apr 12;23(8):4263. doi: 10.3390/ijms23084263

Table 1.

Performance comparison of various deep learning models on the P.ELM training dataset; ten-fold cross-validation was used.

**Residue = S**

| Methods | AUC (%) | Acc (%) | Sn (%) | Sp (%) | Pre (%) | F1 (%) | MCC |
|---|---|---|---|---|---|---|---|
| TransPhos | **85.79** | **78.18** | **80.56** | 75.80 | **76.83** | **78.65** | **0.564** |
| CNN | 81.56 | 74.96 | 77.12 | 72.80 | 73.85 | 75.45 | 0.500 |
| LSTM | 84.20 | 76.99 | 79.61 | 74.37 | 75.57 | 77.54 | 0.541 |
| RNN | 82.66 | 75.18 | 75.39 | 74.97 | 75.00 | 75.20 | 0.504 |
| FCNN | 82.89 | 75.05 | 72.93 | **77.16** | 76.08 | 74.47 | 0.501 |

**Residue = T**

| Methods | AUC (%) | Acc (%) | Sn (%) | Sp (%) | Pre (%) | F1 (%) | MCC |
|---|---|---|---|---|---|---|---|
| TransPhos | 83.35 | 75.59 | **76.54** | 74.70 | 74.12 | 75.31 | 0.512 |
| CNN | 81.99 | 75.50 | 74.82 | 76.16 | 74.82 | 74.82 | 0.510 |
| LSTM | **83.91** | **76.87** | 76.09 | **77.62** | **76.30** | **76.19** | **0.537** |
| RNN | 79.89 | 71.72 | 76.18 | 67.50 | 68.93 | 72.38 | 0.438 |
| FCNN | 80.00 | 73.48 | 73.46 | 73.50 | 72.41 | 72.93 | 0.469 |

**Residue = Y**

| Methods | AUC (%) | Acc (%) | Sn (%) | Sp (%) | Pre (%) | F1 (%) | MCC |
|---|---|---|---|---|---|---|---|
| TransPhos | 69.53 | 63.62 | 61.99 | 65.11 | 61.99 | **69.06** | **0.449** |
| CNN | 67.40 | **64.43** | 56.17 | **72.00** | **64.80** | 60.18 | 0.286 |
| LSTM | 68.71 | 63.73 | 66.10 | 61.56 | 61.21 | 63.56 | 0.276 |
| RNN | 67.84 | 62.22 | **75.79** | 49.78 | 58.07 | 65.76 | 0.264 |
| FCNN | **69.55** | 64.31 | 61.02 | 67.33 | 63.16 | 62.07 | 0.284 |

Accuracy (Acc), Sensitivity (Sn), Specificity (Sp), Precision (Pre), F1 Score (F1), and the Matthews correlation coefficient (MCC) were calculated to measure the performance of the models. Bold values indicate the best-performing model for that evaluation metric.
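For reference, all of the metrics in Table 1 can be computed from the four confusion-matrix counts of a binary classifier. The sketch below shows the standard definitions; the function name and the example counts are illustrative and do not come from the paper:

```python
import math

def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts
    (true/false positives and negatives), matching those reported in Table 1."""
    acc = (tp + tn) / (tp + fp + tn + fn)   # accuracy
    sn = tp / (tp + fn)                     # sensitivity (recall)
    sp = tn / (tn + fp)                     # specificity
    pre = tp / (tp + fp)                    # precision
    f1 = 2 * pre * sn / (pre + sn)          # harmonic mean of precision and recall
    # Matthews correlation coefficient, in [-1, 1] rather than a percentage
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )
    return {"Acc": acc, "Sn": sn, "Sp": sp, "Pre": pre, "F1": f1, "MCC": mcc}

# Illustrative counts (not from the paper):
metrics = classification_metrics(tp=50, fp=10, tn=40, fn=20)
```

Note that AUC cannot be derived from a single confusion matrix; it is computed from the ranking of prediction scores across all thresholds, which is why it is reported separately from the threshold-dependent metrics above.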