Table 3. Performance of the different methods on the SemEval-2016 dataset: out-of-sample, in-sample, and 10-fold cross-validation accuracy.
| Method | Out-of-sample accuracy | In-sample accuracy | Cross-validation accuracy | Cross-validation st. dev. |
|---|---|---|---|---|
| mOnt | 78.31% | 75.31% | 75.31% | 0.0144 |
| wOnt | 72.80% | 70.80% | 70.90% | 0.0504 |
| sOnt | 76.46% | 73.92% | 73.87% | 0.0141 |
| mOnt + CABASC | 85.11% | 82.73% | 80.79% | 0.0226 |
| mOnt + LCR-Rot-hop | 86.80% | 88.21% | 82.88% | 0.0224 |
| sOnt + CABASC | 83.16% | 79.53% | 72.04% | 0.1047 |
| sOnt + LCR-Rot-hop | 84.49% | 86.07% | 79.73% | 0.0348 |
wOnt denotes a semi-automatic ontology built with the same methods as sOnt, but with words (instead of synsets) as terms.
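As a point of reference, the two cross-validation columns correspond to the mean and standard deviation of the per-fold accuracies. The sketch below shows one way such numbers can be computed; it is not the paper's implementation, and the names `model_fn` and `cross_validate`, the scikit-learn-style `fit`/`predict` interface, and the fold seed are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def cross_validate(model_fn, X, y, n_splits=10, seed=42):
    """Return mean and st. dev. of per-fold accuracies (hypothetical helper)."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    accuracies = []
    for train_idx, test_idx in skf.split(X, y):
        model = model_fn()                      # fresh, untrained model per fold
        model.fit(X[train_idx], y[train_idx])   # train on the other 9 folds
        preds = model.predict(X[test_idx])      # evaluate on the held-out fold
        accuracies.append(np.mean(preds == y[test_idx]))
    return np.mean(accuracies), np.std(accuracies)
```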