Author manuscript; available in PMC: 2017 Oct 30.
Published in final edited form as: Proc Conf Assoc Comput Linguist Meet. 2017;2017:299–309. doi: 10.18653/v1/P17-1028

Table 2.

NER results (precision, recall, and F1, in %) for Task 1 (crowd label aggregation). Rows 1–3 show non-sequential methods; Rows 4–6 show sequential methods.

Method              Precision   Recall     F1
Majority Vote           78.35    56.57  65.71
MACE                    65.10    69.81  67.37
Dawid-Skene (DS)        78.05    65.78  71.39

CRF-MA                  80.29    51.20  62.53
DS then HMM             76.81    71.41  74.01
HMM-Crowd               77.40    72.29  74.76
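The majority-vote baseline in Row 1 aggregates the crowd's labels independently at each token, taking the most frequent label among annotators. A minimal sketch of this aggregation, using hypothetical BIO-style labels (not the paper's data):

```python
from collections import Counter

def majority_vote(annotations):
    """Aggregate per-token crowd labels by majority vote.

    annotations: one label sequence per annotator over the same tokens,
    e.g. [["B-PER", "O"], ["B-PER", "B-LOC"], ["O", "O"]].
    Returns one label per token; ties break by first-seen label.
    """
    n_tokens = len(annotations[0])
    aggregated = []
    for t in range(n_tokens):
        votes = Counter(seq[t] for seq in annotations)
        aggregated.append(votes.most_common(1)[0][0])
    return aggregated

# Toy example: three annotators, two tokens.
print(majority_vote([["B-PER", "O"], ["B-PER", "B-LOC"], ["O", "O"]]))
# ['B-PER', 'O']
```

Because each token is decided in isolation, this baseline ignores label transitions, which is one reason the sequential methods in Rows 4–6 (which model the label sequence jointly) reach higher F1 in the table.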