Table 1.
Method | # of targets | Pearson Ave^a | p-value^b | Spearman Ave^c | p-value | GDT-score Total(ave.)^d | p-value
---|---|---|---|---|---|---|---
125(TASSER-QA)^e | 98 | 0.834 | – | 0.734 | – | 58.06(0.592) | –
TASSER-QA-all^f | 98 | 0.828 | 0.51 | 0.791 | 8.6×10−8 | 57.22(0.584) | 0.08
634(Pcons) | 98 | 0.811 | 0.16 | 0.757 | 0.18 | 55.05(0.562) | 3.7×10−7
556(LEE) | 96 | 0.797 | 0.049 | 0.747 | 0.67 | 53.86(0.561) | 2.4×10−6
713(Circle-QA) | 98 | 0.730 | 2.7×10−14 | 0.662 | 1.7×10−5 | 56.06(0.572) | 1.9×10−4
633(ProQ) | 98 | 0.719 | 1.1×10−12 | 0.597 | 6.4×10−12 | 54.15(0.553) | 4.5×10−6
038(GeneSilico) | 88 | 0.712 | 1.4×10−11 | 0.621 | 1.4×10−8 | 49.09(0.558) | 2.1×10−5
692(ProQlocal) | 98 | 0.711 | 2.5×10−11 | 0.591 | 8.1×10−12 | 54.03(0.551) | 3.1×10−6
178(Bilab) | 98 | 0.699 | 1.7×10−12 | 0.585 | 2.9×10−10 | 54.53(0.556) | 1.2×10−4
704(QA-ModFOLD) | 98 | 0.675 | 5.9×10−15 | 0.600 | 3.9×10−10 | 53.92(0.550) | 2.0×10−6
699(ABIpro-h) | 98 | 0.674 | 2.5×10−12 | 0.629 | 6.7×10−7 | 56.49(0.576) | 7.8×10−3
091(Ma-OPUS) | 97 | 0.662 | 1.6×10−17 | 0.608 | 5.5×10−9 | 53.25(0.549) | 5.1×10−5
013(Jones-UCL) | 81 | 0.647 | 2.3×10−16 | 0.544 | 7.7×10−16 | 45.28(0.559) | 3.5×10−7
703(QA-ModCHECK) | 71 | 0.624 | 1.9×10−13 | 0.575 | 2.7×10−8 | 30.61(0.431) | 1.4×10−8
717(CaspIta-FRST) | 90 | 0.586 | 1.4×10−25 | 0.518 | 3.9×10−21 | 48.49(0.539) | 6.6×10−10
025(Zhang-Server)^g | 98 | – | – | – | – | 57.35(0.585) | 0.07
Best^h | 98 | – | – | – | – | 62.00(0.633) | –
^a Pearson linear correlation coefficient, averaged per target. The values may differ slightly from those on the CASP7 website because we did not include models for which only alignments were provided, and we calculated GDT-scores over whole target chains rather than domains.
^b P-values for the difference in results between TASSER-QA and each other method. A difference with p-value < 0.05 is considered significant at the 95% confidence level.
^c Spearman rank correlation coefficient, averaged per target.
^d Total GDT-score of the server models ranked first by each quality-assessment method. Numbers in parentheses are averages per target.
^e This work, using only the first model from each server. Descriptions of all other methods can be found at the CASP7 website, http://predictioncenter.org/casp7/.
^f This work, using all models from each server.
^g Zhang-Server is not a quality-assessment (QA) prediction method. It is the best server whose models were evaluated by the QA methods, and it is used here as a baseline for comparison.
^h Models ranked by their true GDT-score, as a second baseline.
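To make the table's per-target averaging concrete, the following is a minimal sketch of how the Pearson and Spearman columns are obtained: for each target, correlate the predicted model quality against the actual GDT-scores of that target's models, then average the coefficients over all targets. The target names and numeric values below are hypothetical illustrations, not data from the table.

```python
import math

def pearson(x, y):
    # Pearson linear correlation coefficient between two equal-length lists.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ranks(v):
    # Convert values to ranks (1-based), averaging ranks over ties.
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(v):
        j = i
        while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman rank correlation = Pearson correlation of the ranks.
    return pearson(ranks(x), ranks(y))

# Hypothetical per-target data: (predicted qualities, actual GDT-scores)
# for the models submitted for each target.
targets = {
    "T0001": ([0.9, 0.5, 0.7], [0.62, 0.40, 0.55]),
    "T0002": ([0.3, 0.8, 0.6], [0.35, 0.58, 0.50]),
}

# Average the per-target coefficients, as in the table's Ave columns.
avg_pearson = sum(pearson(p, g) for p, g in targets.values()) / len(targets)
avg_spearman = sum(spearman(p, g) for p, g in targets.values()) / len(targets)
```

The GDT-score column is computed differently: for each target, the QA method's top-ranked model is selected and its actual GDT-scores are summed (and averaged) over targets, so it measures model-selection performance rather than correlation.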