J Biomed Inform. 2022 Mar 21;129:104054. doi: 10.1016/j.jbi.2022.104054

Table 3. Classification performance. In each sub-table, the first group of columns reports results without sample balance and the second group with sample balance; cells marked "–" have no reported value (Textblob and Vader are reported only once, without the sample-balance distinction and without a training accuracy).

Precision

| Method | Class 1 (without balance) | Class −1 (without balance) | Class 0 (without balance) | Class 1 (with balance) | Class −1 (with balance) | Class 0 (with balance) |
| --- | --- | --- | --- | --- | --- | --- |
| Textblob | 0.33 | 0.15 | 0.60 | – | – | – |
| Vader | 0.33 | 0.19 | 0.58 | – | – | – |
| TF-IDF + DT | 0.46 | 0.29 | 0.68 | 0.41 | 0.26 | 0.67 |
| TF-IDF + RF | 0.91 | 0.91 | 0.66 | 0.70 | 0.57 | 0.79 |
| TF-IDF + NB | 0.69 | 0.58 | 0.70 | 0.58 | 0.35 | 0.84 |
| TF-IDF + SVM | 0.61 | 0.49 | 0.76 | 0.56 | 0.40 | 0.81 |
| TF-IDF + LR | 0.68 | 0.58 | 0.75 | 0.58 | 0.41 | 0.81 |
| FastText + LSTM | 0.65 | 0.43 | 0.68 | 0.44 | 0.26 | 0.78 |
| GloVe + LSTM | 0.56 | 0.40 | 0.72 | 0.40 | 0.32 | 0.75 |

Recall

| Method | Class 1 (without balance) | Class −1 (without balance) | Class 0 (without balance) | Class 1 (with balance) | Class −1 (with balance) | Class 0 (with balance) |
| --- | --- | --- | --- | --- | --- | --- |
| Textblob | 0.57 | 0.26 | 0.31 | – | – | – |
| Vader | 0.52 | 0.52 | 0.24 | – | – | – |
| TF-IDF + DT | 0.36 | 0.21 | 0.78 | 0.39 | 0.31 | 0.66 |
| TF-IDF + RF | 0.24 | 0.04 | 0.99 | 0.61 | 0.49 | 0.85 |
| TF-IDF + NB | 0.43 | 0.11 | 0.92 | 0.63 | 0.73 | 0.63 |
| TF-IDF + SVM | 0.56 | 0.38 | 0.82 | 0.62 | 0.66 | 0.67 |
| TF-IDF + LR | 0.54 | 0.34 | 0.88 | 0.61 | 0.64 | 0.71 |
| FastText + LSTM | 0.33 | 0.21 | 0.90 | 0.55 | 0.40 | 0.61 |
| GloVe + LSTM | 0.43 | 0.35 | 0.82 | 0.58 | 0.34 | 0.58 |

F1-score

| Method | Class 1 (without balance) | Class −1 (without balance) | Class 0 (without balance) | Class 1 (with balance) | Class −1 (with balance) | Class 0 (with balance) |
| --- | --- | --- | --- | --- | --- | --- |
| Textblob | 0.41 | 0.19 | 0.41 | – | – | – |
| Vader | 0.40 | 0.28 | 0.34 | – | – | – |
| TF-IDF + DT | 0.40 | 0.25 | 0.72 | 0.40 | 0.28 | 0.66 |
| TF-IDF + RF | 0.38 | 0.08 | 0.79 | 0.65 | 0.52 | 0.82 |
| TF-IDF + NB | 0.53 | 0.19 | 0.79 | 0.61 | 0.47 | 0.72 |
| TF-IDF + SVM | 0.59 | 0.43 | 0.79 | 0.59 | 0.50 | 0.73 |
| TF-IDF + LR | 0.60 | 0.42 | 0.81 | 0.60 | 0.50 | 0.75 |
| FastText + LSTM | 0.44 | 0.28 | 0.78 | 0.49 | 0.31 | 0.68 |
| GloVe + LSTM | 0.50 | 0.37 | 0.76 | 0.47 | 0.33 | 0.65 |

Accuracy

| Method | Training (without balance) | Testing (without balance) | Training (with balance) | Testing (with balance) |
| --- | --- | --- | --- | --- |
| Textblob | – | 37.4% | – | – |
| Vader | – | 34.9% | – | – |
| TF-IDF + DT | 94.6% | 59.6% | 94.5% | 54.3% |
| TF-IDF + RF | 92.8% | 67.6% | 99.2% | 74.4% |
| TF-IDF + NB | 83.9% | 69.1% | 85.6% | 64.4% |
| TF-IDF + SVM | 89.0% | 69.7% | 92.9% | 65.5% |
| TF-IDF + LR | 85.6% | 72.1% | 94.6% | 67.4% |
| FastText + LSTM | 94.2% | 66.3% | 86.9% | 56.8% |
| GloVe + LSTM | 89.4% | 65.9% | 92.2% | 55.3% |
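
For reference, precision, recall, and F1 in Table 3 are the standard per-class metrics (F1 is the harmonic mean of precision and recall). The sketch below is not the authors' code: it illustrates, under assumed details (scikit-learn, an 80/20 stratified split, a logistic-regression classifier, and placeholder `texts`/`labels` inputs with labels in {1, −1, 0}), how a TF-IDF + LR pipeline of this kind can be evaluated to produce per-class precision/recall/F1 and training/testing accuracy.

```python
# Minimal sketch (not the authors' implementation): evaluating a TF-IDF +
# Logistic Regression sentiment classifier with the metrics reported in Table 3.
# The inputs `texts` (list of documents) and `labels` (values in {1, -1, 0})
# are placeholders; the split ratio and random seed are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline


def evaluate_tfidf_lr(texts, labels, test_size=0.2, seed=42):
    """Report training/testing accuracy and per-class precision, recall, F1."""
    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=test_size, stratify=labels, random_state=seed
    )
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)

    train_acc = accuracy_score(y_train, model.predict(X_train))
    test_acc = accuracy_score(y_test, model.predict(X_test))
    print(f"Training accuracy: {train_acc:.1%}   Testing accuracy: {test_acc:.1%}")

    # Per-class precision, recall and F1 for classes 1, -1 and 0.
    print(classification_report(y_test, model.predict(X_test), labels=[1, -1, 0]))
```

The "with sample balance" condition would additionally involve rebalancing the training data (for example by over- or under-sampling) before fitting; that step is omitted here, and the paper's exact balancing procedure is not shown in this table.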