Table 12.
| Method | TM-score | MaxSub | GDT | Combined |
|---|---|---|---|---|
| MODCHECK | 0.74 | 0.80 | 0.77 | 0.77 |
| ModFOLD | 0.74 | 0.74 | 0.71 | 0.74 |
| ProQLG | 0.71 | 0.69 | 0.71 | 0.71 |
| ProQ* | 0.71 | 0.74 | 0.69 | 0.69 |
| ProQMX | 0.57 | 0.63 | 0.63 | 0.63 |
| ModSSEA | 0.51 | 0.51 | 0.51 | 0.51 |
| 3D-Jury† | 0.54 | 0.49 | 0.57 | 0.49 |
| Random | 0.03 | 0.03 | 0.06 | 0.06 |
As in Table 10, but here the original server ranking is also considered and added to the score as an extra weighting of (6 - r)/40, where r is the original server ranking, an integer from 1 to 5. The results obtained from a random re-ranking of the models from each server (random assignment of scores between 0 and 1) are also shown for comparison.

\* The official predicted MQAP scores for these methods were downloaded from the CASP7 website; all other MQAP methods were run in house during the CASP7 experiment.

† MQAP methods that rely on the comparison of multiple models or on additional information from multiple servers; all other methods are capable of producing a single score for a single model.
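As a minimal sketch of the re-ranking scheme described in the caption (the function names, input types, and validation are assumptions for illustration, not part of the CASP7 pipeline), the extra rank weighting and the random baseline could be expressed as:

```python
import random

def weighted_score(mqap_score: float, server_rank: int) -> float:
    """Add the extra server-rank weighting (6 - r)/40 to an MQAP score.

    server_rank is the original server ranking, r, an integer from 1 to 5:
    rank 1 adds 5/40 = 0.125, rank 5 adds 1/40 = 0.025.
    """
    if not 1 <= server_rank <= 5:
        raise ValueError("server rank must be between 1 and 5")
    return mqap_score + (6 - server_rank) / 40

def random_rerank(models: list) -> list:
    """Baseline comparison: re-rank models by uniformly random
    scores in [0, 1], i.e. a random permutation of the model list."""
    return sorted(models, key=lambda _: random.random())
```

The weighting term simply biases the MQAP score toward models that the original server already ranked highly, with the bonus shrinking linearly from 0.125 (rank 1) to 0.025 (rank 5).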