Comput Struct Biotechnol J. 2017 Jul 19;15:388–395. doi: 10.1016/j.csbj.2017.07.001

Table 1.

Comparison of performance metrics for error detection on simulated HIV datasets. The FP/TP ratio is the ratio of false positives to true positives; Recall measures the percentage of all true k-mers that are predicted by an algorithm; Precision measures the percentage of k-mers predicted by an algorithm that are true k-mers.

Algorithm   | FP/TP ratio         | Recall              | Precision
            | HIV 100x | HIV 400x | HIV 100x | HIV 400x | HIV 100x | HIV 400x
Uncorrected |    53    |  121     |  98.91   |  99.67   |   1.85   |   0.82
Quake       |   9.26   |   29.5   |  98.63   |  94.84   |   9.74   |   3.27
BLESS       |   0.71   |   76.7   |  98.38   |  99.36   |  58.48   |   1.28
Musket      |   0.46   |  121     |  98.46   |  99.67   |  68.48   |   0.82
BFC         |   2.12   |  112     |  98.47   |  99.57   |  32.01   |   0.89
BayesHammer |   0.37   |   69.1   |  98.47   |  98.59   |  73.04   |   1.42
Seecer      |  12.1    |  110     |  98.49   |  98.31   |   7.65   |   0.90
MultiRes    |   0.11   |    0.048 |  95.01   |  98.17   |  89.34   |  95.39

The false positive/true positive ratios (FP/TP ratios), Recall, and Precision are compared on two HIV datasets for the methods Quake, BLESS, Musket, BFC, BayesHammer, Seecer, and the proposed method MultiRes. The error-corrected reads from each method are broken into k-mers and compared against the true k-mers of the HIV-1 viral populations. Uncorrected denotes the statistics when no error correction is performed. Bold in each column indicates the best method for that dataset and metric.
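The evaluation above reduces to simple set arithmetic over k-mers. A minimal sketch of how such metrics could be computed is below; the function names (`kmers`, `kmer_metrics`) and the toy sequences are illustrative assumptions, not the authors' actual evaluation code.

```python
def kmers(read, k):
    # Break a read into its overlapping k-mers (as a set).
    return {read[i:i + k] for i in range(len(read) - k + 1)}

def kmer_metrics(predicted, true):
    """Compare predicted k-mers (from error-corrected reads) against
    the true k-mers of the reference population.

    TP = predicted k-mers that are true; FP = predicted k-mers that
    are not. Recall and Precision are reported as percentages, as in
    the table above."""
    tp = len(predicted & true)   # set intersection
    fp = len(predicted - true)   # set difference
    return {
        "fp_tp_ratio": fp / tp,
        "recall": 100.0 * tp / len(true),
        "precision": 100.0 * tp / len(predicted),
    }

# Toy usage with hypothetical 3-mer sets:
predicted = kmers("ACGTA", 3) | {"TTT"}   # {"ACG", "CGT", "GTA", "TTT"}
true = kmers("ACGT", 3) | {"GTA", "AAA"}  # {"ACG", "CGT", "GTA", "AAA"}
print(kmer_metrics(predicted, true))
```

In practice the true k-mer set would be built from the known HIV-1 haplotype sequences and the predicted set from the corrected reads of each tool; the sketch only shows the set-based bookkeeping behind the three columns of the table.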