Table 3.
Prediction performance of the model architectures, evaluated on the testing dataset and on all datasets
| Measures | Testing dataset | | | All datasets | | |
|---|---|---|---|---|---|---|
| | CNN | BGRU | DeepSec | CNN | BGRU | DeepSec |
| Accuracy | 0.806068 | 0.855145 | **0.871139** | 0.825292 | 0.811404 | **0.871345** |
| Recall | 0.782884 | **0.911252** | 0.872120 | 0.777778 | **0.931287** | 0.887427 |
| Precision | **0.875502** | 0.687112 | 0.868200 | **0.872807** | 0.691520 | 0.855263 |
| F-measure | 0.858212 | 0.904143 | **0.910294** | 0.816577 | 0.831593 | **0.873381** |
| MCC | 0.587026 | 0.608209 | **0.691481** | 0.653542 | 0.64152 | **0.743075** |
| AUC | 0.912392 | 0.897955 | **0.940572** | 0.906867 | 0.903960 | **0.941104** |
Note: The highest scores are in bold.
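For reference, the confusion-matrix-based measures in Table 3 (all except AUC, which requires ranked prediction scores) can be computed as sketched below. This is an illustrative sketch assuming the standard definitions of each measure; the `classification_metrics` helper and the confusion-matrix counts are hypothetical, not taken from the paper.

```python
import math

def classification_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix measures (Accuracy, Recall,
    Precision, F-measure as F1, and MCC)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * recall / (precision + recall)
    # Matthews correlation coefficient
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom
    return {"Accuracy": accuracy, "Recall": recall,
            "Precision": precision, "F-measure": f_measure, "MCC": mcc}

# Hypothetical counts, for illustration only
metrics = classification_metrics(tp=80, fp=10, tn=85, fn=25)
for name, value in metrics.items():
    print(f"{name}: {value:.6f}")
```

Note that a recall higher than precision (as for BGRU above) indicates the model favors catching positives at the cost of more false alarms; MCC summarizes all four confusion-matrix cells in a single value.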