Table 3.
Metric scores of the eight individual classifiers and of the soft voting classifier composed of random forest, logistic regression, K-nearest neighbors, and multilayer perceptron, all trained with calcium transient features. The best-performing classifier is shown in bold.
| ML classifiers | Sensitivity | Specificity | Precision | Accuracy | F1-score |
|---|---|---|---|---|---|
| Random forest | 0.8519 | 0.8226 | 0.8070 | 0.8362 | 0.8288 |
| Support vector machine | 0.8214 | 0.8167 | 0.8070 | 0.8190 | 0.8142 |
| K-nearest neighbors | 0.8182 | 0.8033 | 0.7895 | 0.8103 | 0.8036 |
| Decision tree | 0.7719 | 0.7797 | 0.7719 | 0.7759 | 0.7719 |
| Logistic regression | 0.8750 | 0.8667 | 0.8596 | 0.8707 | 0.8673 |
| Adaptive boosting | 0.7759 | 0.7931 | 0.7895 | 0.7845 | 0.7826 |
| Extreme gradient boosting | 0.7544 | 0.7627 | 0.7544 | 0.7586 | 0.7544 |
| Multilayer perceptron | 0.8596 | 0.8644 | 0.8596 | 0.8621 | 0.8596 |
| **Soft voting** | **0.8909** | **0.8689** | **0.8596** | **0.8793** | **0.8750** |
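The soft voting ensemble combines the four constituent classifiers by averaging their predicted class probabilities. A minimal sketch of such an ensemble is shown below, assuming a scikit-learn implementation; the hyperparameters and the placeholder data names (`X_train`, `y_train`, `X_test`) are illustrative and are not taken from the study.

```python
# Sketch of a soft voting ensemble over the four base learners named in Table 3.
# Hyperparameters are illustrative defaults, not those used in the study.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Base learners; feature scaling is assumed for the distance- and gradient-based models.
rf = RandomForestClassifier(random_state=0)
lr = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
knn = make_pipeline(StandardScaler(), KNeighborsClassifier())
mlp = make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0))

# Soft voting: average the predicted class probabilities of the base learners
# and predict the class with the highest mean probability.
ensemble = VotingClassifier(
    estimators=[("rf", rf), ("lr", lr), ("knn", knn), ("mlp", mlp)],
    voting="soft",
)

# X_train, y_train would hold the calcium transient feature matrix and labels.
# ensemble.fit(X_train, y_train)
# y_pred = ensemble.predict(X_test)
```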