Table 2. Summary of participant activity and feedback.
| Participant | Docs opened | Total feedback | Unique feedback | Unseen feedback | Model count | Error count | Word Tree (docs) | Span | Label |
|---|---|---|---|---|---|---|---|---|---|
| p1 | 129 | 86 | 66 | 14 | 17 | 2 | 1 (19) | 16 | 51 |
| p2 | 117 | 125 | 112 | 1 | 12 | 6 | 4 (40) | 24 | 61 |
| p3 | 71 | 144 | 88 | 5 | 12 | 3 | 2 (16) | 28 | 98 |
| p4 | 94 | 162 | 93 | 4 | 12 | 0 | 6 (43) | 26 | 106 |
| p5 | 104 | 141 | 133 | 14 | 7 | 1 | 1 (18) | 6 | 117 |
| p6 | 170 | 301 | 230 | 29 | 6 | 2 | 3 (44) | 58 | 181 |
| p7 | 50 | 243 | 202 | 141 | 6 | 2 | 8 (190) | 21 | 32 |
| p8 | 54 | 63 | 55 | 0 | 6 | 1 | 0 (0) | 23 | 40 |
| p9 | 68 | 91 | 81 | 0 | 10 | 2 | 0 (0) | 34 | 57 |

Column definitions:

- *Docs opened*: documents the participant viewed; we do not assume agreement when no feedback was given.
- *Total feedback*: number of feedback items provided.
- *Unique feedback*: unique document labels inferred from the feedback.
- *Unseen feedback*: documents labeled without first viewing them (possible when using the Word Tree).
- *Model count*: number of training iterations.
- *Error count*: conflicts and overrides in the provided feedback.
- *Word Tree (docs)*, *Span*, *Label*: type of feedback, i.e., feedback items provided through each input mechanism: the Word Tree view (with the number of affected documents in parentheses), highlighting a span, or assigning a label to the document.
Participants p1–p4 started with an initial model trained on 10 documents, while p5–p9 started with 30 documents.