2022 Sep 26;25(2):493–512. doi: 10.1007/s10796-022-10329-7

Table 1.

Performance of different embedding and classification models

Embedding models                        Diversity   Avg. F1 (Proposed)   Avg. F1 (Baseline)

(a) English news
all-MiniLM-L12-v2                       .7834       .9064                .8641
all-distilroberta-v1                    .7767       .8864                .8214
all-mpnet-base-v2                       .7653       .8812                .8199

(b) Chinese news
paraphrase-multilingual-MiniLM-L12-v2   .7962       .8824                .7625
distiluse-base-multilingual-cased-v1    .7321       .8801                .7832
distiluse-base-multilingual-cased-v2    .7415       .8695                .7346

In the original table, values in bold mark the best-performing models.
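
The embedding models in Table 1 are standard sentence-transformers checkpoints, so the pipeline can be outlined in code. The sketch below is illustrative only: it assumes the news texts are embedded with one of the listed models, reads "diversity" as mean pairwise cosine distance over the embeddings (an assumed definition, not confirmed by the table), and reports a macro-averaged F1 for a simple downstream classifier. The toy texts, labels, and logistic-regression classifier are stand-ins, not the paper's proposed or baseline models.

```python
# Minimal sketch: embed news texts with one of the Table 1 models, compute an
# assumed diversity score (mean pairwise cosine distance), and evaluate a toy
# downstream classifier with macro-averaged F1. Dataset, classifier, and the
# diversity definition are illustrative assumptions, not the paper's method.
from sentence_transformers import SentenceTransformer, util
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Toy stand-in for a labeled news corpus (0 = sports, 1 = finance).
texts = [
    "The home team clinched the title after a late goal.",
    "Markets rallied as the central bank held interest rates steady.",
    "The striker signed a three-year contract with the club.",
    "Quarterly earnings beat analyst expectations across the sector.",
    "Fans packed the stadium for the championship final.",
    "Bond yields fell after the inflation report was released.",
]
labels = [0, 1, 0, 1, 0, 1]

# One of the English embedding models listed in Table 1.
model = SentenceTransformer("all-MiniLM-L12-v2")
embeddings = model.encode(texts)  # shape: (n_texts, embedding_dim)

# Diversity as mean pairwise cosine distance (assumed, for illustration only).
sims = util.cos_sim(embeddings, embeddings)
n = len(texts)
pairwise = [1.0 - float(sims[i][j]) for i in range(n) for j in range(i + 1, n)]
diversity = sum(pairwise) / len(pairwise)

# Downstream classification on the embeddings, scored with macro-averaged F1.
X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.5, stratify=labels, random_state=0
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
f1 = f1_score(y_test, clf.predict(X_test), average="macro")

print(f"diversity (mean pairwise cosine distance): {diversity:.4f}")
print(f"macro F1 of the toy classifier: {f1:.4f}")
```

Swapping the model name for any other checkpoint in Table 1 (e.g. paraphrase-multilingual-MiniLM-L12-v2 for Chinese news) changes only the embedding step; the downstream scoring stays the same.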