2019 Dec 3;20:627. doi: 10.1186/s12859-019-3217-3

Table 5.

The effectiveness of domain-specific contextual word representation according to the mean F1 scores of 30 different random seeds

Pre-trained word model    Mean F1    SD      Min      Max
PubMed word2vec           53.42      2.51    46.67    56.70
general-purpose ELMo      54.30      3.61    42.76    56.51
random-PubMed ELMo        53.81      3.65    38.89    57.01
specific-PubMed ELMo      55.91      1.49    51.24    57.48

All of the highest scores are highlighted in bold in the original table, except for the SD. The first-row results are taken from the best results of the previous experiments (i.e., the last row in Table 4). Note: “PubMed word2vec” denotes the context-free word model; “general-purpose ELMo” denotes the general-purpose contextual word model; “random-PubMed ELMo” denotes the domain-general contextual word model trained on 118 million tokens randomly selected from PubMed abstracts; and “specific-PubMed ELMo” denotes the domain-specific contextual word model trained on 118 million tokens from bacteria-relevant PubMed abstracts
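The summary statistics reported above (mean, SD, min, max over 30 random seeds) can be reproduced with a minimal sketch like the following; the `summarize_f1` helper and the example scores are illustrative, not from the paper's code.

```python
import statistics

def summarize_f1(scores):
    """Summarize per-seed F1 scores as mean, sample SD, min, and max."""
    return {
        "mean": statistics.mean(scores),
        "sd": statistics.stdev(scores),  # sample standard deviation (n - 1 denominator)
        "min": min(scores),
        "max": max(scores),
    }

# Hypothetical F1 scores from a handful of random seeds (illustrative only;
# the paper aggregates over 30 seeds per word model).
scores = [53.1, 55.9, 54.2, 56.7, 52.8]
summary = summarize_f1(scores)
```

The sample (n - 1) standard deviation is the usual choice when the seeds are treated as a sample of possible training runs; a low SD, as for specific-PubMed ELMo, indicates the model's F1 is stable across random initializations.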