2022 Nov 30:1–27. Online ahead of print. doi: 10.1007/s10844-022-00764-y

Table 5.

Performance measures of various text models

Text model               Validation accuracy   Test accuracy
BERT (Baseline)          86.54%                86.44%
InferSent (Baseline)     86.34%                86.31%
LSTM+CNN                 85.51%                85.25%
BiGRU+Capsule            85.98%                86.18%
BiLSTM+BiGRU+attention   87.89%                87.90%
2D CNN                   86.52%                86.77%
BERT+Dense               89.34%                89.46%
RoBERTa+Dense            88.52%                88.62%

Bold indicates the models with the better performance measures (here, validation and test accuracy)
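The ranking implied by the table can be checked with a short script. This is a minimal sketch using only the accuracies reported in Table 5; the dictionary below simply transcribes the table, and the variable names are illustrative.

```python
# Validation/test accuracies transcribed from Table 5 (in percent).
results = {
    "BERT (Baseline)": (86.54, 86.44),
    "InferSent (Baseline)": (86.34, 86.31),
    "LSTM+CNN": (85.51, 85.25),
    "BiGRU+Capsule": (85.98, 86.18),
    "BiLSTM+BiGRU+attention": (87.89, 87.90),
    "2D CNN": (86.52, 86.77),
    "BERT+Dense": (89.34, 89.46),
    "RoBERTa+Dense": (88.52, 88.62),
}

# Rank models by test accuracy, best first.
ranked = sorted(results.items(), key=lambda kv: kv[1][1], reverse=True)
best_model, (best_val, best_test) = ranked[0]
print(best_model, best_val, best_test)  # BERT+Dense 89.34 89.46
```

On these numbers, BERT+Dense leads on both validation and test accuracy, followed by RoBERTa+Dense, then BiLSTM+BiGRU+attention.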