Sci Rep. 2021 Mar 4;11:5251. doi: 10.1038/s41598-021-84374-8

Table 4.

Comparison of the pretraining methods depending on the model architecture (i.e., residual networks of different depths).

Pretraining method                    ResNet-18v2     ResNet-34v2     ResNet-50v2
None (random weight initialization)   .731 (± .019)   .764 (± .012)   .708 (± .023)
Beat classification                   .779 (± .014)   .794 (± .018)   .775 (± .015)
Rhythm classification                 .767 (± .012)   .775 (± .020)   .760 (± .008)
Heart rate classification             .766 (± .011)   .771 (± .008)   .761 (± .019)
Future prediction                     .758 (± .013)   .761 (± .014)   .743* (± .010)

For each method, we report the average macro F1 score (and the standard deviation) on our test set for the PhysioNet/CinC Challenge 2017 (refs. 7, 8). ResNet-34v2 achieves the best score for every pretraining method. We suspect that ResNet-34v2 hits a sweet spot in model capacity, whereas ResNet-18v2 underfits and ResNet-50v2 overfits the training data.
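For readers reproducing the evaluation, the macro F1 score is simply the unweighted mean of the per-class F1 scores, so minority classes count as much as the majority class. A minimal sketch with scikit-learn (using illustrative label arrays, not the authors' evaluation code) could look like this:

```python
import numpy as np
from sklearn.metrics import f1_score

# Hypothetical ground-truth and predicted labels for the four
# PhysioNet/CinC 2017 classes: 0 = normal, 1 = AF, 2 = other rhythm, 3 = noisy.
y_true = np.array([0, 1, 2, 3, 0, 1, 2, 0, 3, 1])
y_pred = np.array([0, 1, 2, 2, 0, 1, 0, 0, 3, 1])

# Macro F1: unweighted mean of the per-class F1 scores.
macro_f1 = f1_score(y_true, y_pred, average="macro")
print(f"macro F1 = {macro_f1:.3f}")

# The table reports the mean and (sample) standard deviation over repeated
# runs, e.g. several random seeds per pretraining method (values illustrative).
run_scores = np.array([0.779, 0.781, 0.776, 0.780])
print(f"{run_scores.mean():.3f} (± {run_scores.std(ddof=1):.3f})")
```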

*Due to the sharp increase in model complexity, we pretrain only the first three stages of the ResNet-50v2.
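As an illustration of such partial pretraining, only the parameters of the early stages can be copied from a pretrained checkpoint while the remaining stage and classifier head stay randomly initialized. The sketch below assumes a torchvision-style ResNet-50 and its stage names (conv1, layer1–layer4); it is not the authors' ResNet-50v2 implementation.

```python
import torch
from torchvision.models import resnet50

# Hypothetical example: transfer only the first three stages of a pretrained
# ResNet-50 into a downstream 4-class rhythm classifier.
pretrained = resnet50()            # stands in for the pretrained encoder
target = resnet50(num_classes=4)   # downstream classifier, randomly initialized

# Keep the stem plus the first three stages; leave layer4 and the head untouched.
transferred_prefixes = ("conv1", "bn1", "layer1", "layer2", "layer3")
partial_state = {
    name: weight
    for name, weight in pretrained.state_dict().items()
    if name.startswith(transferred_prefixes)
}

result = target.load_state_dict(partial_state, strict=False)
print(f"transferred {len(partial_state)} tensors; "
      f"{len(result.missing_keys)} remain randomly initialized")
```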