Nucleic Acids Res. 2017 Apr 29;45(Web Server issue):W416–W421. doi: 10.1093/nar/gkx332

Table 2. Independent benchmarking of global scoring with official CASP12 data.

Rank  Gr.Name           Gr.Model  GDT_TS AUC  LDDT AUC  CAD(AA) AUC  SG AUC
1     ModFOLD6_rank     QA072_1   0.993       0.99      0.926        0.962
2     ModFOLD6_cor      QA360_1   0.995       0.988     0.885        0.949
3     ModFOLD6          QA201_1   0.994       0.988     0.878        0.944
4     qSVMQA            QA120_1   0.982       0.983     0.862        0.937
5     ProQ3             QA213_1   0.985       0.978     0.892        0.916
6     ProQ3_1_diso      QA095_1   0.982       0.978     0.891        0.922
7     ProQ3_1           QA302_1   0.981       0.977     0.889        0.917
8     ProQ2             QA203_1   0.944       0.971     0.921        0.932
9     MUfoldQA_S        QA334_1   0.977       0.968     0.898        0.913
10    MULTICOM-CLUSTER  QA287_1   0.956       0.968     0.893        0.921

The ability of each method to separate good models (accuracy score ≥ 50) from bad models (accuracy score < 50), according to the GDT_TS, LDDT, CAD(AA) and SG scores, is evaluated using the Area Under the Curve (AUC) (see http://predictioncenter.org/casp12/doc/presentations/CASP12_QA_AK.pdf). The AUC scores are calculated over all models for all targets (QA stage 1, select 20). Only the top 10 methods are shown and the table is sorted by the LDDT AUC score. Data are from http://predictioncenter.org/casp12/qa_aucmcc.cgi. See also Supplementary Tables S5–S10.
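As an informal illustration of the evaluation described above, the short Python sketch below computes an AUC for a single method by labelling models as good or bad from their reference GDT_TS (threshold 50) and scoring the method's predicted global quality values against those labels with scikit-learn's roc_auc_score. The arrays here are hypothetical example data, and this is not the official CASP assessment code.

```python
# Minimal sketch of the AUC-based separation of good vs. bad models.
# Assumption: one reference GDT_TS value (0-100 scale) and one predicted
# global quality score per server model, pooled over all targets.
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical example data (not taken from CASP12).
gdt_ts = np.array([72.4, 38.1, 55.0, 12.7, 91.3, 49.9])            # reference accuracy scores
predicted_quality = np.array([0.81, 0.35, 0.60, 0.10, 0.95, 0.52])  # method's global scores

labels = (gdt_ts >= 50).astype(int)   # 1 = good model, 0 = bad model
auc = roc_auc_score(labels, predicted_quality)
print(f"AUC = {auc:.3f}")
```

The same procedure would be repeated with LDDT, CAD(AA) and SG as the reference accuracy scores to obtain the other AUC columns in the table.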