Table 22. Performance analysis of deep learning models.
| Model | Accuracy | Class | Precision | Recall | F1 Score |
|---|---|---|---|---|---|
| LSTM | 0.80 | Neg. | 0.83 | 0.79 | 0.81 |
| | | Pos. | 0.93 | 0.93 | 0.94 |
| | | Avg. | 0.88 | 0.87 | 0.87 |
| CNN-LSTM | 0.90 | Neg. | 0.78 | 0.88 | 0.83 |
| | | Pos. | 0.96 | 0.91 | 0.93 |
| | | Avg. | 0.87 | 0.90 | 0.88 |
| GRU | 0.86 | Neg. | 0.84 | 0.88 | 0.86 |
| | | Pos. | 0.88 | 0.83 | 0.85 |
| | | Avg. | 0.86 | 0.86 | 0.86 |
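The per-class precision, recall, and F1 scores reported in Table 22 can be obtained directly from model predictions with scikit-learn. The snippet below is a minimal sketch, assuming binary Neg./Pos. labels; the arrays `y_true` and `y_pred` are hypothetical placeholders, not the paper's data.

```python
# Minimal sketch: computing per-class precision, recall, and F1 as in Table 22.
# y_true and y_pred are hypothetical placeholders for test labels and model
# predictions (0 = Neg., 1 = Pos.); they do not reproduce the reported numbers.
import numpy as np
from sklearn.metrics import classification_report

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])   # ground-truth class labels
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])   # predictions from a trained model

# classification_report prints precision, recall, and F1 for each class plus
# averaged scores, corresponding to the Neg./Pos./Avg. rows of the table.
print(classification_report(y_true, y_pred, target_names=["Neg.", "Pos."], digits=2))
```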