
Table 1. Commonly used metrics to assess the performance and effectiveness of machine learning models.

| Metric | Definition |
| --- | --- |
| Accuracy | Measures the overall correctness of the predictions made by a model. It is the ratio of correctly predicted instances to the total number of instances in the dataset. |
| Sensitivity (recall or true positive rate) | Quantifies the proportion of actual positive instances that the model correctly identifies as positive. It is the ratio of true positives to the sum of true positives and false negatives. |
| Specificity | Represents the ability of a model to correctly identify negative instances. It is the ratio of true negatives to the sum of true negatives and false positives. |
| Precision | Indicates the proportion of correctly predicted positive instances out of all instances the model predicts as positive. It is the ratio of true positives to the sum of true positives and false positives. |
| F1 score | The harmonic mean of precision and sensitivity; it provides a balanced evaluation of a model's performance. |
| AUC (area under the ROC curve) | The ROC curve plots the true positive rate against the false positive rate at various classification thresholds. AUC is the area under this curve and measures the model's ability to discriminate between positive and negative instances. |
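To make the definitions above concrete, the following minimal Python sketch (not part of the original article) computes each metric in Table 1 from a toy set of binary labels and model scores. The labels, scores, and the 0.5 decision threshold are illustrative assumptions chosen for the example; scikit-learn's roc_auc_score is used for the threshold-free AUC.

```python
# Illustrative sketch: computing the Table 1 metrics for a binary classifier.
# The labels/scores below are made-up example data, not data from the study.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                     # ground-truth labels
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.3, 0.6, 0.8, 0.1])    # model scores
y_pred = (y_score >= 0.5).astype(int)                            # assumed 0.5 threshold

# Confusion-matrix counts.
tp = np.sum((y_pred == 1) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))

accuracy = (tp + tn) / (tp + tn + fp + fn)   # correct predictions / all instances
sensitivity = tp / (tp + fn)                 # recall / true positive rate
specificity = tn / (tn + fp)                 # true negative rate
precision = tp / (tp + fp)                   # correct positives / predicted positives
f1 = 2 * precision * sensitivity / (precision + sensitivity)  # harmonic mean
auc = roc_auc_score(y_true, y_score)         # area under ROC, over all thresholds

print(f"accuracy={accuracy:.2f} sensitivity={sensitivity:.2f} "
      f"specificity={specificity:.2f} precision={precision:.2f} "
      f"f1={f1:.2f} auc={auc:.2f}")
```

On this toy data every threshold-dependent metric happens to equal 0.75 at the 0.5 cutoff, while AUC, which aggregates performance over all thresholds, comes out higher (about 0.94); this illustrates why AUC is often reported alongside the threshold-dependent metrics.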