
Table 1. Performance comparison in the context of the CAMEO continuous evaluation platform.

| Server | Response time (hh:mm:ss) (N = 168) | lDDT total (N = 168) | lDDT easy (N = 37) | lDDT medium (N = 90) | lDDT hard (N = 41) | lDDT BS (N = 69) | QS-score (N = 32) | Model confidence (N = 168) |
|---|---|---|---|---|---|---|---|---|
| SWISS-MODEL | 00:15:48 | 66.22 | 86.01 | 69.71 | 40.67 | 70.88 | 63.95 | 0.85 |
| HHpredB | 01:16:15* | 65.95 | 82.10* | 69.68 | 43.18 | 71.47 | - | 0.79* |
| NaiveBLAST | 01:20:27* | 58.93* | 82.86* | 64.20* | 25.76* | 63.88* | - | 0.68* |
| PRIMO | 02:12:08* | 60.26* | 84.51* | 65.07* | 27.82* | 67.30* | - | 0.67* |
| SPARKS-X | 02:35:21* | 63.14* | 80.06* | 65.57* | 42.53 | 67.76* | - | 0.54* |
| RaptorX | 06:28:57* | 69.15* | 83.35* | 72.10* | 49.88* | 68.85 | - | 0.65* |
| IntFOLD4-TS | 32:47:59* | 68.41* | 83.76* | 70.88 | 49.11* | 71.65 | - | 0.84 |
| Robetta | 37:00:07* | 71.60* | 85.17 | 74.00* | 54.08* | 67.48* | 60.20 | 0.81* |

Performance is measured on a benchmark dataset of 250 targets collected during the CAMEO time range 20 October 2017–13 January 2018. Results from SWISS-MODEL and seven other modelling servers were collected from CAMEO, and performance was evaluated on the common subset of targets for which all compared servers returned a model. Each column reports average values for response time, model accuracy (lDDT, QS-score) and self-assessment of model quality (model confidence). The lDDT evaluation is further split according to CAMEO's definition of target difficulty; per-column subset sizes are given in brackets. Asterisks indicate a statistically significant difference (P-value < 0.05) compared to SWISS-MODEL, based on a paired t-test.
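As an illustration of the significance test described in the caption, the following Python sketch runs a paired t-test on per-target lDDT scores for two servers. The scores pair naturally by target, which is why a paired rather than unpaired test is appropriate here. The array names and values are hypothetical placeholders, not the actual CAMEO benchmark data; the sketch assumes SciPy's scipy.stats.ttest_rel, which implements the related-samples t-test.

```python
# Minimal sketch of the paired significance test, assuming per-target
# lDDT scores for two servers on the common target subset. Values are
# illustrative placeholders, not the actual CAMEO data.
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-target lDDT scores (one entry per common target).
lddt_swissmodel = np.array([66.1, 71.3, 58.4, 80.2, 45.9])
lddt_other = np.array([64.8, 70.5, 55.1, 79.9, 41.2])

# Paired t-test: pairing by target cancels per-target difficulty,
# so only the per-target score differences between servers are tested.
t_stat, p_value = ttest_rel(lddt_swissmodel, lddt_other)
print(f"t = {t_stat:.3f}, P = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at P < 0.05")
```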