2022 Jul 15;2022:6750457. doi: 10.1155/2022/6750457

Table 2.

Performance metrics of the classifiers.

| Performance metric | Description | Formula (derived from confusion matrix) |
| --- | --- | --- |
| Accuracy | Proportion of correctly classified samples, positive and negative, among all samples | Ac = (TN + TP) / (TN + FN + TP + FP) |
| Precision | Proportion of predicted positives that are truly positive | PR = TP / (TP + FP) |
| MSE | Average of the squared differences between true values yᵢ and predictions ŷᵢ | MSE = (1/N) Σᵢ₌₁ᴺ (yᵢ − ŷᵢ)² |
| F1 score | Harmonic mean of precision and recall, giving the classification accuracy for a specific class | F1 = 2TP / (2TP + FP + FN) |
| Matthews correlation coefficient | Pearson correlation between the true and predicted labels | MCC = (TP·TN − FP·FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN)) |
| Fowlkes–Mallows index | Geometric mean of precision and recall; a measure of similarity between clusterings | FM = √((TP / (TP + FP)) · (TP / (TP + FN))) |
| Error rate | Proportion of incorrect predictions among all observations | ErR = (FP + FN) / (TP + FN + TN + FP) |
| Jaccard metric | Ratio of true positives to the union of predicted and actual positives | Jac = TP / (TP + FP + FN) |
| Classification success index | Class-specific measure combining positive predictive value (PPV) and sensitivity (SEN), both expressed as percentages | CSI = PPV + SEN − 100 |
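As a sketch of how the confusion-matrix metrics in Table 2 could be computed, the snippet below counts TP, TN, FP, and FN from binary labels and evaluates each formula (MSE is omitted since it is a regression metric, not derivable from the confusion matrix). The function and variable names are illustrative, not from the article.

```python
import math

def confusion_counts(y_true, y_pred):
    # Count confusion-matrix cells for binary labels (1 = positive class).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def classifier_metrics(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    n = tp + tn + fp + fn
    precision = tp / (tp + fp)           # PPV
    recall = tp / (tp + fn)              # sensitivity (SEN)
    return {
        "accuracy": (tp + tn) / n,
        "precision": precision,
        "f1": 2 * tp / (2 * tp + fp + fn),
        # Matthews correlation coefficient
        "mcc": (tp * tn - fp * fn)
               / math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)),
        # Fowlkes-Mallows index: geometric mean of precision and recall
        "fowlkes_mallows": math.sqrt(precision * recall),
        "error_rate": (fp + fn) / n,
        "jaccard": tp / (tp + fp + fn),
        # CSI uses PPV and SEN as percentages, so it ranges from -100 to 100
        "csi": 100 * precision + 100 * recall - 100,
    }
```

For example, `classifier_metrics([1, 1, 1, 0, 0, 1], [1, 0, 1, 0, 1, 1])` gives a precision and F1 of 0.75 and an MCC of 0.25.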