Table 4:
Interannotator Agreement (IAA): IAA is calculated as the F1 score for all annotation tasks (a sketch of this computation follows the table). The IAA for the training batch is between the two annotators and does not include the prior gold standard.
Annotation batch | Category IAA | Scope IAA | Fuzzy category IAA | Fuzzy scope IAA |
---|---|---|---|---|
Prior work (60 articles) | 78% | 87% | 79% | 90% |
Training (8 articles) | 77% | 66% | 78% | 87% |
K.J.S. and S.A. (8 articles) | 81% | 82% | 81% | 93% |
K.J.S. with M.R.B. adjudicator* (12 articles) | 88% | 92% | 89% | 95% |
S.A. with M.R.B. adjudicator* (12 articles) | 89% | 92% | 90% | 96% |
All combined | 82% | 87% | 83% | 92% |
*F1 score between the annotator and the final gold-standard version after adjudication with M.R.B.
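For concreteness, the sketch below shows one way a pairwise F1-based IAA could be computed for span annotations, with an exact-match variant (category/scope IAA) and an overlap-based variant (fuzzy IAA). The span representation, function name, and category labels are illustrative assumptions, not the annotation tooling used here.

```python
# Minimal sketch: pairwise F1 as an inter-annotator agreement measure.
# Spans are hypothetical (start, end, label) tuples; exact matching gives a
# "category/scope"-style IAA, overlap matching a "fuzzy"-style IAA.
from typing import List, Tuple

Span = Tuple[int, int, str]  # (start offset, end offset, category label)


def f1_agreement(a: List[Span], b: List[Span], fuzzy: bool = False) -> float:
    """F1 between two annotators, treating annotator `b` as the reference."""
    def match(x: Span, y: Span) -> bool:
        if fuzzy:
            # Fuzzy match: same label and overlapping character spans.
            return x[2] == y[2] and x[0] < y[1] and y[0] < x[1]
        return x == y  # exact match on offsets and label

    matched_b = set()
    tp = 0
    for x in a:
        for j, y in enumerate(b):
            if j not in matched_b and match(x, y):
                matched_b.add(j)
                tp += 1
                break
    precision = tp / len(a) if a else 0.0
    recall = tp / len(b) if b else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# Example with hypothetical labels: two annotators over the same article.
ann1 = [(0, 10, "claim"), (15, 30, "evidence")]
ann2 = [(0, 10, "claim"), (16, 28, "evidence")]
print(f1_agreement(ann1, ann2))              # exact match: 0.5
print(f1_agreement(ann1, ann2, fuzzy=True))  # fuzzy match: 1.0
```

The same function applies to the adjudicated batches by passing the post-adjudication gold standard as the reference annotation.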