Table 4. Reference-dependent alignment accuracy (%) on the ProSup benchmark
Methods | TM-align | DALI | Matt | DeepAlign | BR |
---|---|---|---|---|---|
SPARKS | – | – | – | – | 57.2 |
SALIGN | – | – | – | – | 58.3 |
RAPTOR | – | – | – | – | 61.3 |
SP3 | – | – | – | – | 65.3 |
SP5 | – | – | – | – | 68.7 |
HHpred(Local) | 57.53 | 60.58 | 60.61 | 60.36 | 64.90 |
HHpred(Global) | 61.84 | 65.31 | 64.52 | 65.29 | 69.04 |
BThreader | 60.87 | 64.89 | 63.97 | 64.26 | 76.08 |
CNFpred | **66.26** | **71.16** | **71.06** | **72.01** | **77.09** |
Columns 2–5 report accuracy measured against reference alignments generated by four structure alignment tools (TM-align, DALI, Matt and DeepAlign); column ‘BR’ uses the reference alignments provided with the benchmark. The results of SPARKS, SP3 and SP5 are taken from Zhang et al. (2008), those of SALIGN from Qiu and Elber (2006) and those of RAPTOR from Xu (2005). Bold indicates the best performance.
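For readers unfamiliar with the metric, reference-dependent alignment accuracy is the percentage of aligned residue pairs in the reference alignment that are reproduced by the predicted alignment. The sketch below illustrates this under the assumption of exact-match scoring (the benchmark's actual protocol may allow a small positional shift tolerance); the function name and the dict representation of an alignment are illustrative, not from the paper.

```python
# A minimal sketch (not the paper's code) of reference-dependent alignment
# accuracy: the percentage of query-template residue pairs in the reference
# alignment that the predicted alignment reproduces exactly. Alignments are
# represented as dicts mapping query residue index -> template residue index;
# this representation and the function name are illustrative assumptions.

def alignment_accuracy(predicted: dict[int, int], reference: dict[int, int]) -> float:
    """Percentage of reference residue pairs reproduced by the prediction."""
    if not reference:
        raise ValueError("reference alignment is empty")
    correct = sum(1 for q, t in reference.items() if predicted.get(q) == t)
    return 100.0 * correct / len(reference)

# Example: the prediction recovers 2 of the 3 reference pairs -> 66.7%.
reference = {10: 5, 11: 6, 12: 7}
predicted = {10: 5, 11: 6, 12: 9}
print(f"{alignment_accuracy(predicted, reference):.1f}")  # 66.7
```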