BMC Bioinformatics. 2022 Aug 23;23:354. doi: 10.1186/s12859-022-04847-z

Table 6.

Comparison of the performance of models using different pre-training losses

Loss               Pre-train                        Train
                   Precision  Recall   F1-score     Precision  Recall   F1-score
PRAUC-loss         0.0351     0.6177   0.0665       0.8599     0.7897   0.8233
Negative-F1        0.1564     0.0987   0.1210       0.3587     0.2107   0.2655
Weighted-logistic  0.0283     0.7559   0.0546       0.8963     0.8015   0.8462
No-pretraining     –          –        –            0.8907     0.5861   0.7070

In the pre-training phase, the hard constraint layer is removed from the network and different loss functions are used for training. In the training phase, we used the different pre-trained models for transfer learning to obtain prediction models on the Rfam dataset. The loss function used in the training phase is the Negative-F1 function.
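
For illustration, a minimal sketch of a differentiable negative-F1 ("soft F1") loss is given below. The exact formulation of the Negative-F1 loss is not specified in this table; the PyTorch framework, the function name negative_soft_f1_loss, and the use of probabilistic TP/FP/FN counts are assumptions made for this sketch only.

```python
import torch

def negative_soft_f1_loss(probs: torch.Tensor,
                          targets: torch.Tensor,
                          eps: float = 1e-8) -> torch.Tensor:
    """Differentiable negative soft-F1 loss (illustrative sketch).

    probs   : predicted probabilities in [0, 1]
    targets : binary ground-truth labels, same shape as probs
    TP/FP/FN are computed from probabilities rather than hard 0/1
    predictions, so the expression remains differentiable.
    """
    tp = (probs * targets).sum()
    fp = (probs * (1.0 - targets)).sum()
    fn = ((1.0 - probs) * targets).sum()

    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2.0 * precision * recall / (precision + recall + eps)

    # Minimizing the negative soft F1 maximizes the (soft) F1-score.
    return -f1

# Example usage (assumed training-loop fragment):
# probs = model(x).sigmoid()
# loss = negative_soft_f1_loss(probs, y)
# loss.backward()
```

Under this kind of setup, switching the pre-training loss (e.g. to a PR-AUC surrogate or a weighted logistic loss) would only require swapping this function while the rest of the training loop stays unchanged.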