BMC Bioinformatics. 2017 Oct 10;18:445. doi: 10.1186/s12859-017-1855-x

Table 5. Performance changes with different input representations on the overall-2013 dataset

Input representation                  Precision (%)   Recall (%)   F-score (%)
(1): word without attention           54.7            42.8         48.0
(2): word + att                       76.5            67.5         71.7
(3): word + att + pos                 70.9            74.7         72.7
(4): word + att + position            79.1            73.9         76.4
(5): word + att + pos + position      78.4            76.2         77.3

Every model in this table uses all of the preprocessing techniques of our approach. "Word without attention" denotes the model that uses only word embeddings, without the attention mechanism; "word + att" denotes the model that uses word embeddings together with the attention mechanism. "Pos" and "position" denote the part-of-speech embedding and the position embedding, respectively.
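
To make the input representations concrete, the following is a minimal PyTorch-style sketch, under our own assumptions rather than code from the paper, of how the features in row (5) could be assembled: word, part-of-speech, and position embeddings are looked up per token and concatenated. In DDI extraction, position features conventionally encode the relative distance of each token to the two candidate drugs, so two distance channels are used here; all layer names and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class InputRepresentation(nn.Module):
    """Sketch of row (5): concatenate word, POS-tag, and position
    embeddings per token. Sizes are illustrative assumptions, not the
    paper's reported hyperparameters."""

    def __init__(self, vocab_size=10000, n_pos_tags=50, max_dist=100,
                 word_dim=200, pos_dim=20, dist_dim=20):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.pos_emb = nn.Embedding(n_pos_tags, pos_dim)
        # Two position channels: relative distance to each candidate drug,
        # shifted into [0, 2 * max_dist] so indices are non-negative.
        self.dist1_emb = nn.Embedding(2 * max_dist + 1, dist_dim)
        self.dist2_emb = nn.Embedding(2 * max_dist + 1, dist_dim)

    def forward(self, words, pos_tags, dist1, dist2):
        # Each input: LongTensor of shape (batch, seq_len).
        x = torch.cat([self.word_emb(words),
                       self.pos_emb(pos_tags),
                       self.dist1_emb(dist1),
                       self.dist2_emb(dist2)], dim=-1)
        # Shape: (batch, seq_len, word_dim + pos_dim + 2 * dist_dim).
        return x

# Example with dummy indices: batch of 2 sentences, 30 tokens each.
tok = torch.zeros(2, 30, dtype=torch.long)
x = InputRepresentation()(tok, tok, tok, tok)  # -> shape (2, 30, 260)
```

The attention mechanism of rows (2) through (5) would then assign a weight to each token vector before the encoder; it is omitted here to keep the sketch focused on the input concatenation itself.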