Table 18. Performance of BETO with the full corpus evaluated for multi-label sentiment classification.
| | Precision | Recall | F1-score |
|---|---|---|---|
| **MET** | | | |
| Positive | 0.8806 | 0.8824 | 0.8815 |
| Neutral | 0.7000 | 0.4242 | 0.5283 |
| Negative | 0.7555 | 0.7723 | 0.7638 |
| **Other companies** | | | |
| Positive | 0.6301 | 0.5542 | 0.5897 |
| Neutral | 0.8187 | 0.8238 | 0.8212 |
| Negative | 0.6174 | 0.6283 | 0.6228 |
| **Society** | | | |
| Positive | 0.7442 | 0.7249 | 0.7344 |
| Neutral | 0.7314 | 0.7089 | 0.7200 |
| Negative | 0.7267 | 0.7530 | 0.7396 |
| **Combined** | | | |
| Micro avg | 0.7751 | 0.7640 | 0.7695 |
| Macro avg | 0.7339 | 0.6969 | 0.7113 |
| Weighted avg | 0.7734 | 0.7640 | 0.7680 |
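The "Combined" rows summarize the nine class-level results in three standard ways: the micro average pools all predictions before scoring, the weighted average weights each class by its support, and the macro average is the unweighted mean of the per-class scores. As a sanity check, the macro-averaged precision can be reproduced directly from the (rounded) per-class values in the table; a minimal sketch:

```python
# Macro-averaged precision: the unweighted mean of the per-class
# precisions. Recomputed from the rounded per-class values in Table 18,
# so it matches the reported 0.7339 only up to rounding error.
precisions = [
    0.8806, 0.7000, 0.7555,  # MET: positive, neutral, negative
    0.6301, 0.8187, 0.6174,  # Other companies
    0.7442, 0.7314, 0.7267,  # Society
]
macro_precision = sum(precisions) / len(precisions)
print(round(macro_precision, 4))  # agrees with the reported 0.7339 to ~1e-4
```

The micro and weighted averages cannot be recomputed from the table alone, since they depend on the per-class support counts, which are not reported here.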