Table 3. Comparing performance with benchmark models.
Model | Weighted-AUCROC | Weighted-AUCPR | Weighted-F1 | MCC | ACC |
---|---|---|---|---|---|
HTCInfoMax | 0.9861 | 0.9060 | 0.9030 | 0.8989 | 0.9087 |
ConvTextTM | 0.9806 | 0.8658 | 0.8210 | 0.8393 | 0.8478 |
BERT+HiMatch | **0.9932** | **0.9668** | 0.9172 | 0.9078 | 0.9174 |
HDLTex | 0.9865 | 0.9063 | 0.9030 | 0.8989 | 0.9087 |
DocBERT | 0.9880 | 0.9433 | 0.9173 | 0.9077 | 0.9174 |
Ours | 0.9922 | 0.9639 | **0.9350** | **0.9273** | **0.9348** |
Note: Bold indicates the highest performance in each column.