Table 3.
Model | AUC | Accuracy | Specificity | Sensitivity |
---|---|---|---|---|
Final component architecture | | | | |
FC layer | 0.9* [0.82–0.95] | 0.89 | 0.9 | 0.8 |
1DCNN | 0.91* [0.82–0.95] | 0.89 | 0.89 | 0.9 |
XGBoost | 0.92* [0.84–0.97] | 0.88 | 0.88 | 0.9 |
Dimensionality reduction | | | | |
w/o | 0.92* [0.87–0.97] | 0.85 | 0.84 | 0.9 |
Sparse PCA | 0.94 [0.88–0.98] | 0.89 | 0.89 | 0.9 |
NMF | 0.94 [0.88–0.98] | 0.85 | 0.84 | 0.9 |
Modified LLE | 0.96 [0.92–1.0] | 0.9 | 0.92 | 0.8 |
Spectral embedding | 0.93* [0.89–0.97] | 0.85 | 0.84 | 0.9 |
BiAttention | 0.95 [0.9–0.99] | 0.9 | 0.9 | 0.9 |
Embedding dimension | | | | |
d = 1024 | 0.95 [0.9–0.98] | 0.89 | 0.89 | 0.9 |
d = 256 | 0.96 [0.93–1.0] | 0.92 | 0.92 | 0.9 |
Full | | | | |
Asterisks mark a statistically significant difference compared to the full intermediate model architecture (Kolmogorov–Smirnov test).
Best-performing values are shown in bold.
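For readers who want to reproduce this kind of comparison, the sketch below shows one plausible way to compute the reported metrics and to run the Kolmogorov–Smirnov comparison of a variant against the full model. The bootstrap confidence-interval procedure, the 0.5 decision threshold, the synthetic data, and all function and variable names (`table_metrics`, `bootstrap_auc`, `p_full`, `p_variant`) are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch of the Table 3 quantities; data and procedure are assumed.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.metrics import roc_auc_score, accuracy_score, confusion_matrix

rng = np.random.default_rng(0)

# Toy stand-ins for held-out labels and two models' predicted probabilities.
y_true = rng.integers(0, 2, size=200)
p_full = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, size=200), 0, 1)       # "full" model
p_variant = np.clip(y_true * 0.5 + rng.normal(0.35, 0.25, size=200), 0, 1)  # ablated variant

def table_metrics(y, p, threshold=0.5):
    """AUC, accuracy, specificity, and sensitivity as reported in Table 3."""
    y_hat = (p >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y, y_hat).ravel()
    return {
        "AUC": roc_auc_score(y, p),
        "Accuracy": accuracy_score(y, y_hat),
        "Specificity": tn / (tn + fp),
        "Sensitivity": tp / (tp + fn),
    }

def bootstrap_auc(y, p, n_boot=1000):
    """Bootstrap distribution of AUC (one assumed way to obtain an interval)."""
    aucs = []
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        if len(np.unique(y[idx])) < 2:  # AUC needs both classes present
            continue
        aucs.append(roc_auc_score(y[idx], p[idx]))
    return np.asarray(aucs)

auc_full = bootstrap_auc(y_true, p_full)
auc_variant = bootstrap_auc(y_true, p_variant)

print(table_metrics(y_true, p_full))
print("95% interval (full):", np.percentile(auc_full, [2.5, 97.5]))

# Two-sample KS test between the AUC distributions of the variant and the
# full model; a small p-value corresponds to an asterisk in the table.
stat, p_value = ks_2samp(auc_variant, auc_full)
print(f"KS statistic = {stat:.3f}, p = {p_value:.4f}")
```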