Nucleic Acids Res. 2018 Apr 2;46(12):e72. doi: 10.1093/nar/gky237

Table 1. Reported prediction results from different studies.

| Study^a | Number of TFs | Number of unique TFBSs | Dataset size (total TF-TFBS links) | True TF-TFBS links | False TF-TFBS links | Testing method | Highest accuracy |
|---|---|---|---|---|---|---|---|
| Qian Z. et al. (reported in (33)) | 480 | 2,341 | 10,206 | 3,356 | 6,850 | Leave-one-out | 76.6% |
| Qian Z. et al. (reported in (34)) | 143 | 571 | 10,430 | 3,430 | 7,000 | Leave-one-out | 87.9% |
| Cai, Y. et al. (reported in (35)) | 599 | 2,402 | 35,410 | 3,541 | 31,869 | Leave-one-out | 91.1% |
| DRAF models (on the datasets from this study) | 232 | 44,710 | 1,214,389 | 110,399 | 1,103,990 | 30% holdout | 99.16% |

^a This table shows the prediction accuracy of the DRAF models on the holdout dataset (30% of the total) and of the other models as reported in the original references (33–35), which used different TF-TFBS test datasets. Our full dataset is 34-, 116- and 119-fold larger than the datasets from (35), (34) and (33), respectively. The test dataset for DRAF contains 364,317 (positive and negative) TF-TFBS links, more than 10 times larger than the next largest dataset, used in (35).
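As a quick check of the size comparisons stated in the footnote, the following minimal Python sketch reproduces the 30% holdout size and the fold-ratios from the dataset sizes listed in the table (variable names are illustrative, not from the original study):

```python
# Arithmetic check of the dataset-size comparisons in the Table 1 footnote.
# Totals are copied from the table; variable names are illustrative.

draf_total = 1_214_389   # total TF-TFBS links in the DRAF dataset
ref35_total = 35_410     # dataset size reported in (35)
ref34_total = 10_430     # dataset size reported in (34)
ref33_total = 10_206     # dataset size reported in (33)

holdout = round(0.30 * draf_total)
print(f"30% holdout size: {holdout:,}")  # 364,317 links

# Fold-difference of the full DRAF dataset versus each earlier dataset.
for label, size in [("(35)", ref35_total), ("(34)", ref34_total), ("(33)", ref33_total)]:
    print(f"DRAF total vs {label}: {draf_total / size:.0f}-fold")  # ~34, ~116, ~119

# The holdout alone is still more than 10 times the next largest dataset.
print(f"Holdout vs (35): {holdout / ref35_total:.1f}x")  # ~10.3
```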