BMC Bioinformatics. 2021 Dec 17;22(Suppl 1):598. doi: 10.1186/s12859-021-04141-4

Table 17.

Concept normalization exact-match results on the core + extensions evaluation annotation set of the 30 held-out documents, compared with the baseline ConceptMapper approach

Ontology       OpenNMT class ID (%)  ConceptMapper class ID (%)  ConceptMapper FN class ID (%)  OpenNMT character (%)  ConceptMapper character (%)
ChEBI_EXT      86*                   64                          26                             84*                    66
CL_EXT         82*                   67                          11                             93*                    84
GO_BP_EXT      80*                   34                          44                             76*                    38
GO_CC_EXT      93*                   80                          18                             94*                    84
GO_MF_EXT      69*                   60                          30                             69*                    64
MOP_EXT        92*                   64                          35                             97*                    44
NCBITaxon_EXT  83                    86*                         13                             93*                    87
PR_EXT         15*                   9                           28                             72*                    21
SO_EXT         92*                   19                          40                             91*                    22
UBERON_EXT     81*                   68                          29                             92*                    75

We report the percent exact match at both the class ID level and the character level. We also report the percentage of false negatives (FN) for ConceptMapper (i.e., cases where no class ID was predicted for a given text mention). The best performance between OpenNMT and ConceptMapper is marked with an asterisk (*) at both the class ID and character levels
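To make the two metrics concrete, here is a minimal sketch of how exact match at the class ID level and a character-level score could be computed. The data format (parallel lists of gold and predicted class IDs) and the pooled character-comparison reading of the character-level metric are assumptions for illustration, not the paper's exact evaluation code.

```python
def class_id_exact_match(gold, pred):
    """Percent of mentions whose predicted class ID exactly equals the gold ID."""
    assert len(gold) == len(pred)
    hits = sum(g == p for g, p in zip(gold, pred))
    return 100.0 * hits / len(gold)

def character_match(gold, pred):
    """Percent of matching aligned characters, pooled over all mentions
    (one plausible reading of the character-level metric; an assumption here)."""
    matched = total = 0
    for g, p in zip(gold, pred):
        total += max(len(g), len(p))
        matched += sum(a == b for a, b in zip(g, p))
    return 100.0 * matched / total

# Illustrative (made-up) gold/predicted class IDs for three text mentions
gold = ["GO:0008150", "CHEBI:24431", "CL:0000000"]
pred = ["GO:0008150", "CHEBI:24431", "CL:0000001"]
# Two of three IDs match exactly; the third differs in one character,
# so the character-level score stays high while the class-ID score drops.
print(round(class_id_exact_match(gold, pred), 1))
print(round(character_match(gold, pred), 1))
```

This illustrates why the character-level numbers in the table can exceed the class-ID numbers (e.g., PR_EXT: 15% vs. 72% for OpenNMT): a near-miss prediction counts as zero at the class ID level but still earns most of its characters.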