2023 Oct 17;14:6556. doi: 10.1038/s41467-023-42336-w

Table 1.

Benchmarking error identification by each evaluator on simulated data

Metric           QUAST-LG         CRAQ             Inspector        Merqury
                 CREs     CSEs    CREs     CSEs    CREs     CSEs    Total
Recall (%)       98.061   98.123  95.266   96.207  95.507   28.219  84.616
Precision (%)    98.957   99.112  99.763   97.942  96.750   97.283  91.091
F1 score^a (%)   98.507   98.615  97.463   97.067  96.125   43.748  87.734

^a F1 score was calculated as F1 = (2 × recall × precision) / (recall + precision) and was used to measure the overall accuracy of each evaluator.
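As a sanity check of the footnote's formula, the F1 values in the table can be reproduced directly from the recall and precision rows. The sketch below (not part of the original paper) computes F1 as the harmonic mean of recall and precision for two columns of the table:

```python
def f1_score(recall: float, precision: float) -> float:
    """F1 = (2 * recall * precision) / (recall + precision), as in footnote a."""
    return 2 * recall * precision / (recall + precision)

# QUAST-LG CREs column: recall 98.061 %, precision 98.957 %
print(round(f1_score(98.061, 98.957), 3))  # 98.507, matching the table

# Merqury Total column: recall 84.616 %, precision 91.091 %
print(round(f1_score(84.616, 91.091), 3))  # 87.734, matching the table
```

Because F1 is a harmonic mean, it is pulled strongly toward the smaller of the two inputs, which is why Inspector's CSE column (recall 28.219 % despite 97.283 % precision) yields an F1 of only 43.748 %.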