Table 3. Reference-dependent alignment accuracy (%) on the SALIGN benchmark
| Methods | TMalign | Dali | Matt | DeepAlign |
|---|---|---|---|---|
| SPARKS | 53.10 | – | – | – |
| SALIGN | 56.40 | – | – | – |
| RAPTOR | 40.00 | – | – | – |
| SP3 | 56.30 | – | – | – |
| SP5 | 59.70 | – | – | – |
| HHpred (Local) | 60.64 | 62.94 | 62.97 | 63.16 |
| HHpred (Global) | 62.98 | 63.14 | 63.87 | 63.53 |
| BThreader | 64.40 | 63.13 | 63.05 | 64.09 |
| CNFpred | **66.73** | **67.95** | **68.17** | **69.50** |
The results for SALIGN and RAPTOR are taken from Qiu and Elber (2006) and Xu (2005), respectively; the results for SPARKS, SP3 and SP5 are taken from Zhang et al. (2008). Columns 2–5 report accuracy measured against reference alignments generated by four different tools (TMalign, Dali, Matt and DeepAlign); a dash (–) indicates that no result is available for that reference. Bold indicates the best performance in each column.
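As a point of clarification, reference-dependent accuracy is commonly computed as the percentage of residue pairs in the reference alignment that the predicted alignment reproduces exactly. The sketch below illustrates this under that assumed definition; the function name and the dictionary-based alignment representation are illustrative choices, not part of any of the benchmarked tools.

```python
def reference_dependent_accuracy(predicted, reference):
    """Percentage of reference-aligned query positions that the predicted
    alignment maps to the same template position.

    Both alignments are given as dicts {query_index: template_index}
    containing only aligned (non-gap) positions.
    """
    if not reference:
        return 0.0
    # Count reference pairs reproduced exactly by the prediction.
    correct = sum(1 for q, t in reference.items() if predicted.get(q) == t)
    return 100.0 * correct / len(reference)


# Toy example: the reference aligns query residues 1-4 to template 10-13;
# the predicted alignment reproduces three of the four pairs.
reference = {1: 10, 2: 11, 3: 12, 4: 13}
predicted = {1: 10, 2: 11, 3: 12, 5: 14}
print(reference_dependent_accuracy(predicted, reference))  # 75.0
```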