Nat Commun. 2022 Feb 17;13:922. doi: 10.1038/s41467-022-28540-0

Table 1.

DeepGuide ablation analysis.

Row | Training | Layer | Pearson r (Cas12a) | Pearson r (Cas9)
1 | Random weights | Encoder⇨flatten7 | 0.070 | 0.003
2 | Back-prop on flatten7 | Encoder⇨flatten7 | 0.455 | 0.312
3 | Pretrained + back-prop on flatten7⇨fc8–10⇨mult11 | Encoder⇨flatten7 | 0.532 | 0.353
4 | | Encoder⇨flatten7⇨fc8 | 0.534 | 0.310
5 | | Encoder⇨flatten7⇨fc8⇨fc9 | 0.517 | 0.291
6 | | Encoder⇨flatten7⇨fc8⇨fc9⇨fc10 | 0.514 | 0.305
7 | | Encoder⇨flatten7⇨fc8⇨fc9⇨fc10⇨mult11 | 0.514 | 0.388
8 | Pretrained + back-prop on all layers | Encoder⇨flatten7 | 0.641 | 0.409
9 | | Encoder⇨flatten7⇨fc8 | 0.658 | 0.424
10 | | Encoder⇨flatten7⇨fc8⇨fc9 | 0.664 | 0.414
11 | | Encoder⇨flatten7⇨fc8⇨fc9⇨fc10 | 0.664 | 0.414
12 | | Encoder⇨flatten7⇨fc8⇨fc9⇨fc10⇨mult11 | 0.664 | 0.501

Row 1 shows the performance of the encoder (followed by a flatten layer) using random weights, with no pre-training or back-propagation. Row 2 shows the performance of the encoder (followed by a flatten layer) initialized with random weights and with back-propagation performed only on the flatten layer. Rows 3–7 show the performance after pre-training the encoder and then running back-propagation only on layers downstream of the encoder. Rows 8–12 show the performance after pre-training and then running back-propagation on the whole network, including the encoder. Correlation coefficients in bold correspond to the best performance.

fc, fully connected layer; flatten, flatten layer; mult, multiplication layer (see Supplementary Table 3 for the list of layers).
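The three training regimes in the table (random weights, frozen pretrained encoder with back-propagation only on downstream layers, and full fine-tuning) correspond to toggling which parameters receive gradients. The sketch below is a minimal PyTorch illustration of that mechanism, not the authors' code: the layer shapes, input length, and module names are hypothetical stand-ins for the DeepGuide architecture listed in Supplementary Table 3.

```python
import torch
import torch.nn as nn

# Illustrative stand-in for the convolutional encoder; sizes are arbitrary.
encoder = nn.Sequential(
    nn.Conv1d(4, 8, kernel_size=5, padding=2),  # one-hot DNA channels (A/C/G/T)
    nn.ReLU(),
    nn.Flatten(),                               # analogue of "flatten7"
)
head = nn.Sequential(
    nn.Linear(8 * 20, 16),  # analogue of the fc8-fc10 stack
    nn.ReLU(),
    nn.Linear(16, 1),       # scalar guide-activity score
)
model = nn.Sequential(encoder, head)

def set_trainable(module: nn.Module, trainable: bool) -> None:
    """Freeze or unfreeze all parameters of a module."""
    for p in module.parameters():
        p.requires_grad = trainable

# Row 1 analogue: forward pass with random weights, no training at all.
x = torch.randn(2, 4, 20)   # batch of 2 sequences of length 20
score = model(x)            # shape (2, 1)

# Rows 3-7 analogue: pretrained encoder frozen, back-prop only downstream.
set_trainable(encoder, False)
set_trainable(head, True)
frozen = [n for n, p in model.named_parameters() if not p.requires_grad]

# Rows 8-12 analogue: unfreeze everything and fine-tune the whole network.
set_trainable(model, True)
```

In practice the frozen/unfrozen parameter sets would be handed to an optimizer (e.g. `torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()))`), so that gradient updates reach exactly the layers each ablation row trains.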