Table 2. Performance metrics of the classifiers.
| Performance metric | Description | Formula |
|---|---|---|
| Accuracy | Proportion of all samples, positive and negative, that are classified correctly | (TP + TN) / (TP + TN + FP + FN) |
| Precision | Proportion of samples predicted positive that are truly positive (positive predictive value, PPV) | TP / (TP + FP) |
| MSE | Mean of the squared differences between predicted and true values | (1/n) Σᵢ (yᵢ − ŷᵢ)² |
| F1 score | Harmonic mean of precision and recall, giving a per-class classification score | 2TP / (2TP + FP + FN) |
| Matthews correlation coefficient | Pearson correlation between the true and predicted labels | (TP·TN − FP·FN) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN)) |
| Fowlkes–Mallows index | Geometric mean of precision and recall; a measure of similarity between two clusterings | TP / √((TP+FP)(TP+FN)) |
| Error rate | Proportion of all observations that are predicted incorrectly | (FP + FN) / (TP + TN + FP + FN) |
| Jaccard metric | Ratio of the intersection to the union of the predicted and actual positive sets | TP / (TP + FP + FN) |
| Classification success index | Average over classes of the per-class sum of precision (PPV) and sensitivity (SEN), expressed in percent | CSI = PPV + SEN − 100 |
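As a minimal sketch, the confusion-matrix-based metrics in Table 2 can be computed from the four counts TP, FP, FN, TN of a binary classifier (the function name and dictionary keys below are illustrative, not from the original):

```python
import math

def confusion_metrics(tp, fp, fn, tn):
    """Compute the confusion-matrix-based metrics of Table 2
    for a binary classifier, given the four cell counts."""
    total = tp + fp + fn + tn
    precision = tp / (tp + fp)              # PPV
    recall = tp / (tp + fn)                 # sensitivity (SEN)
    return {
        "accuracy": (tp + tn) / total,
        "precision": precision,
        "recall": recall,
        "f1": 2 * tp / (2 * tp + fp + fn),  # harmonic mean of P and R
        "mcc": (tp * tn - fp * fn) / math.sqrt(
            (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)),
        "fmi": math.sqrt(precision * recall),  # geometric mean of P and R
        "error_rate": (fp + fn) / total,
        "jaccard": tp / (tp + fp + fn),
        # CSI is stated in percentage points: PPV + SEN - 100
        "csi": 100 * precision + 100 * recall - 100,
    }

# Example: 40 TP, 10 FP, 5 FN, 45 TN out of 100 samples
m = confusion_metrics(40, 10, 5, 45)
```

MSE is omitted here because it is computed from the raw predictions, not from the confusion matrix.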