Table 5. For the machine translation task, we report BLEU scores on 8 language pairs.

| Model | en-es^a | en-fr^b | en-ro^c | en-cs^d | en-de^e | en-hu^f | en-pl^g | en-sv^h |
|---|---|---|---|---|---|---|---|---|
| MarianMT | 38.02 | 33.02 | 40.45 | —^i | — | — | — | — |
| F.T-MarianMT | 41.64 | 43.72 | 43.88 | — | — | — | — | — |
| F.T-mT5 | 45.88^j | 53.29^j | 47.28^j | 43.30 | 50.73 | 32.25 | 40.24 | 44.17 |

^a en-es: English-Spanish.
^b en-fr: English-French.
^c en-ro: English-Romanian.
^d en-cs: English-Czech.
^e en-de: English-German.
^f en-hu: English-Hungarian.
^g en-pl: English-Polish.
^h en-sv: English-Swedish.
^i Not applicable.
^j The superior score within the same data set.
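To make the metric behind Table 5 concrete, the following is a minimal, self-contained sketch of sentence-level BLEU: clipped n-gram precisions up to 4-grams, combined by a geometric mean and scaled by a brevity penalty. This is an illustrative simplification, not the evaluation pipeline used for the table; published scores are normally computed at corpus level with a standard tool such as sacreBLEU, which also applies tokenization and smoothing.

```python
import math
from collections import Counter

def sentence_bleu(candidate: str, reference: str, max_n: int = 4) -> float:
    """Simplified single-sentence BLEU (0-100), uniform n-gram weights.

    Illustrative only: no smoothing, single reference, whitespace tokens.
    """
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        # Count n-grams in candidate and reference.
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # Clipped overlap: each candidate n-gram counts at most as often
        # as it appears in the reference.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # without smoothing, any empty precision zeroes the score
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return 100.0 * bp * geo_mean

# An exact match scores 100.
print(sentence_bleu("the cat sat on the mat", "the cat sat on the mat"))
```

A higher BLEU means more n-gram overlap with the reference translation, which is how the fine-tuned models (F.T-MarianMT, F.T-mT5) in Table 5 are compared against the MarianMT baseline.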