Table 1. Performance measures for the testing set data (n = 423).
For all measures, higher values indicate better model predictions. Balanced accuracy accounts for the uneven balance of the two classes (‘temperate’ and ‘virulent’) and was calculated with ‘adjusted = True’, so that random guessing scores 0 and a perfect model scores 1. ‘MCC’ stands for the Matthews correlation coefficient.
| Measure | BACPHLIP | Mavrich | PHACTS |
|---|---|---|---|
| Accuracy | 0.983 | 0.955 | 0.79 |
| Balanced accuracy | 0.97 | 0.917 | 0.528 |
| MCC | 0.967 | 0.911 | 0.586 |
| F1-score | 0.985 | 0.939 | 0.837 |
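
The mention of ‘adjusted = True’ suggests these measures were computed with scikit-learn; the sketch below shows how the four metrics in the table could be reproduced under that assumption. The `y_true`/`y_pred` arrays are hypothetical placeholders, not data from the study.

```python
# Minimal sketch (assumed, not the authors' code): computing the table's
# metrics with scikit-learn for binary lifestyle labels
# (e.g. 0 = temperate, 1 = virulent).
from sklearn.metrics import (accuracy_score, balanced_accuracy_score,
                             matthews_corrcoef, f1_score)

y_true = [1, 0, 1, 1, 0, 1]   # hypothetical ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1]   # hypothetical classifier predictions

accuracy = accuracy_score(y_true, y_pred)
# adjusted=True rescales balanced accuracy so that random guessing
# scores 0 and a perfect classifier scores 1, as stated in the caption.
balanced = balanced_accuracy_score(y_true, y_pred, adjusted=True)
mcc = matthews_corrcoef(y_true, y_pred)
f1 = f1_score(y_true, y_pred)
```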