
Table 4. Evaluation against the CDR corpus

System             Precision   Recall   F1
SemRep-ALL         0.90        0.24     0.38
SemRep-SENTENCE    0.90        0.35     0.50
Xu et al. [19]     0.56        0.58     0.57
Peng et al. [20]   0.66        0.57     0.61

SemRep-ALL indicates the case in which all ground-truth relations are taken into account, while SemRep-SENTENCE indicates the scenario in which only intra-sentence ground-truth relations are considered. Xu et al. [19] was the top-ranking system in the BioCreative V CID task, and Peng et al. [20] reported the best post-challenge results. Both systems perform end-to-end relation extraction.
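The F1 values in the table are consistent with the standard harmonic-mean definition, F1 = 2PR/(P+R). The following minimal sketch recomputes them from the precision/recall pairs above (the `f1_score` helper name is ours, introduced only for illustration):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Precision/recall pairs copied from Table 4.
systems = {
    "SemRep-ALL": (0.90, 0.24),
    "SemRep-SENTENCE": (0.90, 0.35),
    "Xu et al. [19]": (0.56, 0.58),
    "Peng et al. [20]": (0.66, 0.57),
}

for name, (p, r) in systems.items():
    # Prints 0.38, 0.50, 0.57, and 0.61, matching the reported F1 column.
    print(f"{name}: F1 = {f1_score(p, r):.2f}")
```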