BMC Bioinformatics. 2020 Feb 10;21:51. doi: 10.1186/s12859-020-3395-z

Table 3.

Performance comparison with state-of-the-art models under test pattern 1

| Test set | Model | auROC | auPRC | Pearson value | Spearman value |
|---|---|---|---|---|---|
| Total test set | CnnCrispr | 0.975 | **0.679** | **0.682** | **0.154** |
| | CFD | 0.942 | 0.316 | 0.343 | 0.140 |
| | MIT | 0.77 | 0.044 | 0.150 | 0.085 |
| | CNN_std | 0.947 | 0.208 | 0.321 | 0.141 |
| | DeepCrispr | **0.981** | 0.497 | – | 0.133 |
| Hek293t test set | CnnCrispr | 0.971 | **0.686** | **0.712** | **0.160** |
| | CFD | 0.936 | 0.318 | 0.371 | 0.143 |
| | MIT | 0.756 | 0.048 | 0.153 | 0.084 |
| | CNN_std | 0.939 | 0.204 | 0.330 | 0.144 |
| | DeepCrispr | **0.984** | 0.521 | – | 0.136 |
| K562 test set | CnnCrispr | **0.995** | **0.688** | **0.426** | **0.134** |
| | CFD | 0.965 | 0.322 | 0.336 | 0.128 |
| | MIT | 0.814 | 0.033 | 0.057 | 0.086 |
| | CNN_std | 0.983 | 0.287 | 0.319 | 0.132 |
| | DeepCrispr | 0.953 | 0.41 | – | 0.126 |

We downloaded the prediction models of CFD, MIT and CNN_std from their respective websites and obtained their predictions on the same test set used for CnnCrispr. Since the training procedure of CnnCrispr was consistent with that of DeepCrispr, we directly used the test results provided by DeepCrispr in Additional file 2 for the performance comparison; Pearson values were not reported for DeepCrispr. Numbers in boldface indicate the highest score for each metric.
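To make the four columns of Table 3 concrete, the sketch below shows one standard way to compute auROC, auPRC (as average precision), and the Pearson and Spearman correlations from a model's predicted off-target scores. This is not the authors' evaluation code; the labels and scores are hypothetical placeholders, and libraries such as scikit-learn and SciPy offer equivalent functions.

```python
def auroc(labels, scores):
    """auROC via the rank-sum (Mann-Whitney U) formulation: the fraction of
    positive/negative pairs ranked correctly, counting ties as half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def auprc(labels, scores):
    """auPRC as average precision: mean precision over the ranks at which
    each true positive is retrieved."""
    order = sorted(range(len(labels)), key=lambda i: -scores[i])
    tp, ap = 0, 0.0
    for rank, i in enumerate(order, 1):
        if labels[i] == 1:
            tp += 1
            ap += tp / rank
    return ap / sum(labels)

def pearson(x, y):
    """Pearson correlation: covariance normalised by the two standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman correlation: Pearson computed on ranks (average rank for ties)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(v):
            j = i
            while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
                j += 1
            for k in range(i, j + 1):
                r[order[k]] = (i + j) / 2 + 1  # 1-based average rank
            i = j + 1
        return r
    return pearson(ranks(x), ranks(y))

# Hypothetical example: 4 candidate off-target sites, 2 true positives.
labels = [0, 0, 1, 1]
scores = [0.1, 0.2, 0.8, 0.9]  # a perfect ranking gives auROC = auPRC = 1.0
print(auroc(labels, scores), auprc(labels, scores))
```

The large gap between auROC and auPRC in the table is typical of heavily imbalanced off-target data: auROC can stay high when negatives vastly outnumber positives, while average precision penalises every false positive ranked above a true site.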