Table 3. Precision, recall, and F-score of each NER model/geoparser evaluated.
NER model/geoparser | Precision (%) | Recall (%) | F-score (%) |
---|---|---|---|
NCRF++ (literal and associative labels) | 79.9 | 75.4 | 77.6 |
Yahoo! Placemaker | 73.4 | 55.5 | 63.2 |
Edinburgh Geoparser | 81.0 | 52.4 | 63.6 |
spaCy NLP | 82.4 | 68.6 | 74.9 |
Google Cloud Natural Language | 91.0 | 76.6 | 83.2 |
NCRF++ (“Location” label only) | 90.0 | 87.2 | 88.6 |
The NCRF++ models’ scores were averaged over five folds (σ = 1.2–1.3).
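For reference, the reported F-scores are consistent with the standard F1 measure, i.e. the harmonic mean of precision and recall (this is an assumption about the metric used, but recomputing it reproduces the table's values); e.g. for the "Location"-only NCRF++ model:

$$
F_1 = \frac{2PR}{P + R} = \frac{2 \times 90.0 \times 87.2}{90.0 + 87.2} \approx 88.6
$$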