Table 3. The micro-averaged performance for concept extraction (subtask 1) and relation identification (subtask 2).
| Models | Subtask 1 precision | Subtask 1 recall | Subtask 1 F1 score | Subtask 2 precision | Subtask 2 recall | Subtask 2 F1 score |
| --- | --- | --- | --- | --- | --- | --- |
| LSTM<sup>a</sup>-CRFs<sup>b</sup> + BERT<sup>c</sup>-cls + BERT-rel | 0.7760 | 0.8087 | 0.7920 | 0.7343 | 0.5465 | 0.6266 |
| LSTM-CRFs-EN + BERT-cls + BERT-rel<sup>d</sup> | 0.7969 | 0.7920 | 0.7944 | 0.6995 | 0.6184 | 0.6544 |
| BERT-ner + BERT-cls + BERT-rel | 0.8060 | 0.8105 | 0.8083 | 0.7140 | 0.6252<sup>e</sup> | 0.6667 |
| BERT-ner-EN + BERT-cls + BERT-rel | 0.8301<sup>e</sup> | 0.8198<sup>e</sup> | 0.8249<sup>e</sup> | 0.7421<sup>e</sup> | 0.6233 | 0.6775<sup>e</sup> |
<sup>a</sup>LSTM: long short-term memory.

<sup>b</sup>CRFs: conditional random fields.

<sup>c</sup>BERT: bidirectional encoder representations from transformers.

<sup>d</sup>Our best system developed during the challenge.

<sup>e</sup>The best performance in each column.
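For context, micro-averaged scores pool true positives (TP), false positives (FP), and false negatives (FN) across all concept or relation types before computing precision, recall, and F1, so frequent types carry more weight than in a macro average. The sketch below illustrates that computation; the type names and per-type counts are hypothetical and are not values from the table:

```python
def micro_average(counts):
    """Micro-averaged precision, recall, and F1: pool TP/FP/FN
    over all types, then compute the metrics once on the totals."""
    tp = sum(c["tp"] for c in counts.values())
    fp = sum(c["fp"] for c in counts.values())
    fn = sum(c["fn"] for c in counts.values())
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical per-type counts, for illustration only.
counts = {
    "Drug":   {"tp": 120, "fp": 30, "fn": 25},
    "Reason": {"tp": 80,  "fp": 25, "fn": 40},
}
print(micro_average(counts))
```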