PLoS ONE. 2013 May 2;8(5):e62984. doi: 10.1371/journal.pone.0062984

Table 2. Validation results, using the document as text window.

                    25% entities validated   50% entities validated   75% entities validated
Measure   Method    TP      Precision        TP      Precision        TP      Precision
SimGIC    Dict      1,584   34.0             3,117   33.7             4,186   30.0
          CRF       1,361   51.1             2,781   52.1             3,761   46.9
SimUI     Dict      1,424   30.0             2,782   29.5             4,017   28.1
          CRF       1,335   49.8             2,632   49.3             3,781   47.6
Resnik    Dict      1,443   30.4             3,334   35.5             4,371   31.2
          CRF       1,449   55.0             2,633   49.0             3,968   49.6

Number of true positives (TP) and precision obtained at selected subsets of validated entities, corresponding to 25%, 50%, and 75% of the total number of annotations produced by each tool (Method), with validation computed using the semantic similarity measure indicated in Measure. For this evaluation, the whole document was used as the text window.
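
For reference, a sketch of the standard definitions of the three similarity measures listed under Measure, assuming A and B are the ancestor-closed sets of ontology terms annotating the two entities being compared and IC(t) = -log p(t) is the information content of term t (these are the definitions commonly used in the literature; the exact set-level aggregation used in this evaluation is not stated in the table):

\[ \mathrm{SimUI}(A,B) = \frac{|A \cap B|}{|A \cup B|}, \qquad \mathrm{SimGIC}(A,B) = \frac{\sum_{t \in A \cap B} \mathrm{IC}(t)}{\sum_{t \in A \cup B} \mathrm{IC}(t)} \]

Resnik similarity is defined between pairs of terms, \( \mathrm{sim}_{\mathrm{Resnik}}(t_1, t_2) = \mathrm{IC}\big(\mathrm{MICA}(t_1, t_2)\big) \), where MICA denotes the most informative common ancestor; a set-level score is then typically obtained by aggregating over term pairs (e.g., maximum or best-match average), the choice of which is an assumption here.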