Table 1.

| Algorithm | FP/TP ratio (HIV 100x) | FP/TP ratio (HIV 400x) | Recall % (HIV 100x) | Recall % (HIV 400x) | Precision % (HIV 100x) | Precision % (HIV 400x) |
|---|---|---|---|---|---|---|
| Uncorrected | 53 | 121 | 98.91 | 99.67 | 1.85 | 0.82 |
| Quake | 9.26 | 29.5 | **98.63** | 94.84 | 9.74 | 3.27 |
| BLESS | 0.71 | 76.7 | 98.38 | 99.36 | 58.48 | 1.28 |
| Musket | 0.46 | 121 | 98.46 | **99.67** | 68.48 | 0.82 |
| BFC | 2.12 | 112 | 98.47 | 99.57 | 32.01 | 0.89 |
| BayesHammer | 0.37 | 69.1 | 98.47 | 98.59 | 73.04 | 1.42 |
| Seecer | 12.1 | 110 | 98.49 | 98.31 | 7.65 | 0.90 |
| MultiRes | **0.11** | **0.048** | 95.01 | 98.17 | **89.34** | **95.39** |
The false positive/true positive (FP/TP) ratios, recall, and precision are compared on two HIV datasets (HIV 100x and HIV 400x) for the methods Quake, BLESS, Musket, BFC, BayesHammer, Seecer, and the proposed method, MultiRes. The error-corrected reads from each method are broken into k-mers and compared against the true k-mers of the HIV-1 viral populations. Uncorrected denotes the statistics when no error correction is performed. Bold in each column indicates the best method for that dataset and metric.
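The evaluation described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the function name `kmer_metrics`, the choice of k, and the toy inputs are assumptions; it only shows how breaking corrected reads into k-mers and intersecting them with the true k-mer set yields the FP/TP ratio, recall, and precision reported in the table.

```python
def kmers(seq, k):
    """Yield all overlapping k-mers of a sequence."""
    for i in range(len(seq) - k + 1):
        yield seq[i:i + k]

def kmer_metrics(corrected_reads, true_kmers, k):
    """Compare the k-mer content of error-corrected reads against the
    true k-mer set of the underlying population.

    Returns (fp_tp_ratio, recall, precision), where recall is the
    fraction of true k-mers recovered and precision is the fraction of
    observed k-mers that are true.
    """
    observed = {km for read in corrected_reads for km in kmers(read, k)}
    tp = len(observed & true_kmers)   # true k-mers recovered
    fp = len(observed - true_kmers)   # spurious k-mers produced
    fn = len(true_kmers - observed)   # true k-mers missed
    fp_tp = fp / tp if tp else float("inf")
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return fp_tp, recall, precision
```

For example, a single corrected read `"ACGTA"` with k = 3 yields the k-mers ACG, CGT, GTA; against a true set {ACG, CGT, TTT} this gives 2 true positives, 1 false positive, and 1 false negative, i.e. an FP/TP ratio of 0.5 with recall and precision both 2/3.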