TABLE IX. Comparison of Fine-Tuned Transformer-Based Language Models.

| Models \ Dataset | COVIDSenti-A | COVIDSenti-B | COVIDSenti-C | COVIDSenti |
|---|---|---|---|---|
| DistilBERT | 0.937 | 0.929 | 0.926 | 0.939 |
| BERT | 0.941 | 0.937 | 0.932 | 0.948 |
| XLNet | 0.924 | 0.914 | 0.920 | 0.933 |
| ALBERT | 0.914 | 0.920 | 0.910 | 0.929 |
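For reference, the sketch below shows how fine-tuning of this kind can be set up with the Hugging Face Transformers library. It is a minimal illustration, not the authors' exact pipeline: the CSV file names, the "text"/"label" column layout, the label mapping, and the hyperparameters (epochs, batch size, learning rate, sequence length) are all assumptions chosen for demonstration. The checkpoint name can be swapped to reproduce the other rows of Table IX (bert-base-uncased, xlnet-base-cased, albert-base-v2).

```python
# Minimal fine-tuning sketch with Hugging Face Transformers.
# Assumptions (not from the paper): tweets stored in CSV files with
# "text" and "label" columns (e.g., 0 = negative, 1 = neutral, 2 = positive);
# hyperparameters below are illustrative defaults, not the authors' setup.
from datasets import load_dataset
from transformers import (AutoTokenizer,
                          AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

MODEL_NAME = "distilbert-base-uncased"  # or "bert-base-uncased",
                                        # "xlnet-base-cased", "albert-base-v2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=3)           # three sentiment classes

# Hypothetical file names; replace with the actual COVIDSenti split.
dataset = load_dataset("csv", data_files={"train": "covidsenti_a_train.csv",
                                          "test": "covidsenti_a_test.csv"})

def tokenize(batch):
    # Truncate tweets to a fixed maximum length; padding is handled
    # dynamically per batch by the Trainer's default data collator.
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="covidsenti-distilbert",
    num_train_epochs=3,
    per_device_train_batch_size=32,
    learning_rate=2e-5,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["test"],
                  tokenizer=tokenizer)
trainer.train()
print(trainer.evaluate())   # reports evaluation loss on the held-out split
```

To report accuracy or F1 as in the table, a `compute_metrics` function would be passed to the `Trainer`; it is omitted here to keep the sketch short.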