Table 3. Manual evaluation of the original headings and of the headings predicted by the BidirLSTM model on 200 sentences.
| Class | Original headings | Predicted headings by BidirLSTM |
|---|---|---|
| 1. Correct | 0.7400 (148) | 0.8150 (163) |
| 2. Maybe correct | 0.1500 (30) | 0.1300 (26) |
| 3. Incorrect | 0.0850 (17) | 0.0450 (9) |
| 4. Unable to assess | 0.0250 (5) | 0.0100 (2) |
| Automatic evaluation R@1, accuracy (predicted heading equals original heading) | | 0.5800 (116) |
Values are proportions, with counts (n) in parentheses. The bottom row shows the classifier's performance on the same 200 sentences under the R@1 automatic evaluation metric (accuracy: the predicted heading exactly equals the original heading).
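As a minimal sketch of how such an R@1 exact-match accuracy can be computed (this is an illustration, not the paper's evaluation code; the function name `r_at_1` and the whitespace/case normalization are assumptions):

```python
def r_at_1(predicted_headings, original_headings):
    """Fraction of sentences whose top-1 predicted heading equals the original heading."""
    assert len(predicted_headings) == len(original_headings)
    # Assumed normalization: strip surrounding whitespace and ignore case before comparing.
    hits = sum(
        pred.strip().lower() == orig.strip().lower()
        for pred, orig in zip(predicted_headings, original_headings)
    )
    return hits / len(original_headings)

# With the Table 3 numbers, 116 exact matches out of 200 sentences give 0.58.
```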