2021 Dec 27;2021:8522839. doi: 10.1155/2021/8522839

Table 5.

Comparison of all the experiments.

| Experiment | Process | Accuracy | Precision | Recall | F1-score |
|---|---|---|---|---|---|
| CNN with FastText embedding | CNN-based processing | 71.89% | 0.88 | 0.72 | 0.77 |
| Bidirectional LSTM with FastText embedding | Bidirectional GRU/LSTM with global attention | 84.33% | 0.91 | 0.84 | 0.87 |
| USE model | USE pretrained model with TF 1.0 | 92.61% | 0.95 | 0.93 | 0.93 |
| NNLM | Pretrained NNLM-based sentence encoder | 90.16% | 0.81 | 0.90 | 0.86 |
| BERT | BERT tokenization and TF Keras modeling | 91.39% | 0.92 | 0.91 | 0.88 |
| DistilBERT | DistilBERT-based preprocessing of data | 94.77% | 0.95 | 0.95 | 0.94 |
| BERT | Data preprocessing and tokenization with BERT | 97.44% | | | |
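The F1 scores in the table combine precision and recall as their harmonic mean. A minimal helper for sanity-checking a row is sketched below; note that the reported values are likely averaged across classes, so they need not match this single-pair formula exactly (the DistilBERT row, with precision and recall both 0.95, does):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# DistilBERT row: precision 0.95, recall 0.95 -> F1 = 0.95
print(round(f1_score(0.95, 0.95), 2))
```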