. 2019 Aug 7;27(1):39–46. doi: 10.1093/jamia/ocz101

Table 3.

Performance on the test set for the relation (Track 2) and end-to-end (Track 3) extraction tasks of the submitted and improved models.

Model                                                      Precision  Recall  F1-score

Relation
  * Intra [ensemble] + Inter [ensemble]                    0.9463     0.9480  0.9472
    Intra [ensemble] + Inter [ensemble]                    0.9572     0.9456  0.9514

End-to-end
  * NER [recall] + Weighted [ensemble] + Inter [ensemble]  0.9264     0.8318  0.8765
    NER [recall] + Intra [ensemble] + Inter [ensemble]     0.9286     0.8321  0.8777

The asterisk marks the models we submitted to the n2c2 shared task.

NER: named entity recognition.
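As a sanity check, the F1-scores in Table 3 can be reproduced from the reported precision and recall as their harmonic mean. A minimal sketch (the row labels are abbreviated here for brevity; computed values may differ from the reported ones in the fourth decimal place, since the table's figures were presumably computed from unrounded counts):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# (label, precision, recall, reported F1) from Table 3.
rows = [
    ("Relation: Intra + Inter (submitted)",   0.9463, 0.9480, 0.9472),
    ("Relation: Intra + Inter (improved)",    0.9572, 0.9456, 0.9514),
    ("End-to-end: NER + Weighted + Inter",    0.9264, 0.8318, 0.8765),
    ("End-to-end: NER + Intra + Inter",       0.9286, 0.8321, 0.8777),
]

for label, p, r, reported in rows:
    computed = f1_score(p, r)
    # Agreement within rounding of the published P/R values.
    assert abs(computed - reported) < 2e-4
    print(f"{label}: computed F1 = {computed:.4f} (reported {reported:.4f})")
```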