Sci Rep. 2021 Mar 4;11:5251. doi: 10.1038/s41598-021-84374-8

Table 2.

Comparison of the pretraining methods depending on the size of the downstream train set.

Pretraining method                   | 25% train     | 50% train     | 75% train
None (random weight initialization) | .670 (± .013) | .712 (± .010) | .731 (± .019)
Beat classification                 | .739 (± .014) | .763 (± .011) | .779 (± .014)
Rhythm classification               | .707 (± .018) | .727 (± .028) | .767 (± .012)
Heart rate classification           | .722 (± .010) | .749 (± .018) | .766 (± .011)
Future prediction                   | .694 (± .014) | .734 (± .011) | .758 (± .013)

For each method, we report the average macro F1 score (with standard deviation) on our test set for the PhysioNet/CinC Challenge 2017 (refs. 7,8). We examine three sizes of the train set, each a proportion of the entire data set: 25%, 50%, and 75% (the original split). Pretrained models reach the same degree of performance as their non-pretrained counterparts while being trained on less labeled data.
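The macro F1 score used in the table averages per-class F1 scores with equal weight, so minority rhythm classes count as much as the majority class. A minimal sketch of the metric (the function name and example labels are illustrative, not from the paper):

```python
def macro_f1(y_true, y_pred):
    """Macro F1: the unweighted mean of per-class F1 scores.

    Each class contributes equally regardless of its frequency,
    which matters for imbalanced ECG label distributions.
    """
    classes = sorted(set(y_true) | set(y_pred))
    f1_scores = []
    for c in classes:
        # Per-class counts, treating class c as the positive label.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        f1_scores.append(f1)
    return sum(f1_scores) / len(f1_scores)


# Hypothetical labels using the CinC 2017 classes
# (N = normal, A = atrial fibrillation, O = other):
truth = ["N", "N", "A", "O", "N", "A"]
preds = ["N", "A", "A", "O", "N", "N"]
score = macro_f1(truth, preds)
```

This is equivalent to `sklearn.metrics.f1_score(..., average="macro")`; the hand-rolled version only makes the equal-weight averaging explicit.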