Table 1.
Quantitative evaluation results of our SRHP algorithm against comparative methods at the patch level in terms of AUC, Accuracy, Recall, Precision, Specificity, and F1 score. Results are averaged over multiple runs of 10-fold cross-validation; the best results are shown in bold.
| Patch-level | AUC | Accuracy | Recall | Precision | Specificity | F1 Score |
|---|---|---|---|---|---|---|
| DLGg | 0.91 | 85.04% | 0.72 | **0.90** | **0.94** | 0.80 |
| SSAE | 0.79 | 72.07% | 0.65 | 0.73 | 0.79 | 0.69 |
| MATF | 0.94 | 86.41% | 0.84 | 0.87 | 0.89 | 0.85 |
| SRHP | **0.96** | **89.02%** | **0.94** | 0.84 | 0.84 | **0.89** |
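For reference, the threshold-based metrics in Table 1 can all be derived from confusion-matrix counts. The sketch below is illustrative, not the evaluation code used in the paper; the counts passed in are hypothetical, and AUC is omitted because it requires ranked prediction scores rather than counts.

```python
def confusion_metrics(tp, fp, tn, fn):
    """Compute patch-level metrics (as in Table 1) from confusion-matrix counts.

    tp/fp/tn/fn: true-positive, false-positive, true-negative,
    and false-negative counts for the positive (target) class.
    """
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    recall = tp / (tp + fn)          # sensitivity, true-positive rate
    precision = tp / (tp + fp)       # positive predictive value
    specificity = tn / (tn + fp)     # true-negative rate
    f1 = 2 * precision * recall / (precision + recall)
    return {
        "accuracy": accuracy,
        "recall": recall,
        "precision": precision,
        "specificity": specificity,
        "f1": f1,
    }
```

Note, for example, that SRHP's F1 of 0.89 is consistent with its recall (0.94) and precision (0.84): 2(0.94)(0.84)/(0.94 + 0.84) ≈ 0.887.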