Table 2.
Methods | TMalign | Dali | Matt | DeepAlign | BR |
---|---|---|---|---|---|
HHpred(Local) | 42.96 | 57.34 | 46.00 | 46.50 | 45.34 |
HHpred(Global) | 48.82 | 53.13 | 51.48 | 52.48 | 51.48 |
MUSTER | – | – | – | – | 46.70 |
BThreader | 47.35 | 51.30 | 50.13 | 50.53 | 50.01 |
CNFpred | **54.17** | **58.46** | **57.26** | **59.14** | **57.06** |
Columns 2–5 report accuracy measured against reference alignments generated by four different structure alignment tools; column ‘BR’ uses the reference alignments provided with the benchmark. The MUSTER result is its training accuracy, taken from Wu and Zhang (2008); all other numbers are test accuracies. Bold indicates the best performance.
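To make the "best performance" claim directly checkable, the table's numbers can be tabulated and scanned per column. This is an illustrative sketch only (the dictionary and variable names are my own, not the authors' code); `None` marks the entries not reported for MUSTER.

```python
# Table 2 data: alignment accuracy of each method under five sets of
# reference alignments. None marks entries not reported (MUSTER).
table = {
    "HHpred(Local)":  [42.96, 57.34, 46.00, 46.50, 45.34],
    "HHpred(Global)": [48.82, 53.13, 51.48, 52.48, 51.48],
    "MUSTER":         [None,  None,  None,  None,  46.70],
    "BThreader":      [47.35, 51.30, 50.13, 50.53, 50.01],
    "CNFpred":        [54.17, 58.46, 57.26, 59.14, 57.06],
}
columns = ["TMalign", "Dali", "Matt", "DeepAlign", "BR"]

# For each reference set, find the method with the highest reported accuracy.
best = {}
for j, col in enumerate(columns):
    best[col] = max(
        (m for m in table if table[m][j] is not None),
        key=lambda m: table[m][j],
    )
print(best)  # CNFpred is best under every reference set
```

Running the scan confirms that CNFpred attains the highest accuracy in every column, which is what the bolded row indicates.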