Table 4. Performance comparison of ensemble models on the Yidu-S4K and self-annotated data sets.
| Data set and model | Precision (%) | Recall (%) | F1-score (%) |
| --- | --- | --- | --- |
| Yidu-S4K | | | |
| BiLSTM-CRF^a [64] | 69.43 | 72.58 | 70.97 |
| ACNN^b [69] | 83.07 | 87.29 | 85.13 |
| ELMo^c-lattice-LSTM-CRF [70] | 84.69 | 85.35 | 85.02 |
| ELMo-BiLSTM-CRF [41] | —^d | — | 85.16 |
| ELMo-ET^e-CRF [71] | 82.08 | 86.12 | 85.59 |
| MSD_DT_NER^f [72] | 86.09 | 87.29 | 86.69 |
| Our model | 90.37 | 88.22 | 89.28 |
| Self-annotated | | | |
| BiLSTM-CRF | 81.98 | 77.10 | 79.47 |
| Our model | 84.24 | 84.99 | 84.61 |
^a BiLSTM-CRF: bidirectional long short-term memory-conditional random field.
^b ACNN: all convolutional neural network.
^c ELMo: Embeddings from Language Models.
^d Not available.
^e ET: encoder from transformer.
^f MSD_DT_NER: multigranularity semantic dictionary and multimodal named entity recognition.
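
As a consistency check (not part of the original study), the short Python sketch below recomputes the F1-score column of Table 4 from the reported precision and recall values using the standard harmonic-mean formula; the helper name `f1_score` and the choice of rows are ours.

```python
# Sketch only: verify that the reported F1-scores in Table 4 equal the
# harmonic mean of precision and recall, F1 = 2PR / (P + R).
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (values in percent)."""
    return 2 * precision * recall / (precision + recall)

# (precision %, recall %, reported F1 %) for the "Our model" rows of Table 4
rows = {
    "Yidu-S4K, our model": (90.37, 88.22, 89.28),
    "Self-annotated, our model": (84.24, 84.99, 84.61),
}

for name, (p, r, reported) in rows.items():
    print(f"{name}: computed F1 = {f1_score(p, r):.2f}, reported F1 = {reported}")
```

Running this reproduces the reported values (89.28 and 84.61), confirming that the F1-scores in the table are the harmonic means of the listed precision and recall figures.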