Table 10.
| Method | TM-score | MaxSub | GDT | Combined |
| --- | --- | --- | --- | --- |
| PROQ* | 0.69 | 0.71 | 0.69 | 0.69 |
| ModFOLD | 0.66 | 0.66 | 0.66 | 0.66 |
| ProQLG | 0.60 | 0.60 | 0.60 | 0.63 |
| MODCHECK | 0.46 | 0.51 | 0.57 | 0.60 |
| ProQMX | 0.43 | 0.46 | 0.43 | 0.49 |
| 3D-Jury† | 0.44 | 0.38 | 0.47 | 0.44 |
| ModSSEA | 0.20 | 0.17 | 0.23 | 0.20 |
| Random | 0.03 | 0.03 | 0.06 | 0.03 |
The proportion of the 35 fold recognition servers tested that were improved, according to observed model quality scores, by re-ranking their models using each MQAP method. The results obtained from a random re-ranking of the models from each server (random assignment of scores between 0 and 1) are also shown for comparison.

*The official predicted MQAP scores for these methods were downloaded from the CASP7 website; all other MQAP methods were run in house during the CASP7 experiment.

†MQAP methods that rely on the comparison of multiple models or additional information from multiple servers; all other methods are capable of producing a single score for a single model.
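For illustration, the minimal sketch below shows one way the server-improvement proportion and the random baseline could be computed. It assumes a simplified definition of "improved" (the model selected by the MQAP has a higher observed quality score than the server's original first-ranked model) and uses hypothetical data and function names; it is not the evaluation code used in the CASP7 analysis.

```python
import random

def reranking_improves(models, mqap_score):
    """True if re-ranking a server's models by the MQAP score selects a model
    with higher observed quality (e.g. TM-score) than the server's rank-1 model.
    NOTE: this is an assumed, simplified criterion for 'improved'."""
    original_best = models[0]["observed"]        # server's original first-ranked model
    reranked_top = max(models, key=mqap_score)   # model the MQAP would rank first
    return reranked_top["observed"] > original_best

def proportion_improved(servers, mqap_score):
    """Fraction of servers whose selected model improves after MQAP re-ranking."""
    improved = sum(reranking_improves(models, mqap_score)
                   for models in servers.values())
    return improved / len(servers)

# Random baseline: assign each model a score drawn uniformly from [0, 1].
random_score = lambda model: random.random()

# Hypothetical data: per-server lists of models with observed quality scores,
# listed in the server's original ranking order.
servers = {
    "server_A": [{"observed": 0.52}, {"observed": 0.61}, {"observed": 0.48}],
    "server_B": [{"observed": 0.70}, {"observed": 0.66}],
}

print(proportion_improved(servers, random_score))
```

In this toy example only server_A can be improved by chance (it has a better model below rank 1), so repeated runs with the random baseline return either 0.0 or 0.5, illustrating why random re-ranking improves only a small proportion of servers in the table above.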