Table 2.
Pretraining method | 25% train | 50% train | 75% train |
---|---|---|---|
None (random weight initialization) | .670 (± .013) | .712 (± .010) | .731 (± .019) |
Beat classification | .739 (± .014) | .763 (± .011) | .779 (± .014) |
Rhythm classification | .707 (± .018) | .727 (± .028) | .767 (± .012) |
Heart rate classification | .722 (± .010) | .749 (± .018) | .766 (± .011) |
Future prediction | .694 (± .014) | .734 (± .011) | .758 (± .013) |
For each method, we report the average macro score (and standard deviation) on our test set for the PhysioNet/CinC Challenge 2017 [7,8]. We examine three sizes of training set, expressed as a proportion of the entire data set: 25%, 50%, and 75% (the original split). Pretraining allows a model trained on less data to match the performance of the same model trained from scratch on more data.
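The table's evaluation metric could be computed as follows. This is a minimal sketch assuming the "macro score" is a macro-averaged F1 (the usual metric for the CinC 2017 challenge); the label values and predictions are illustrative, not taken from the paper's data.

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        # Per-class true positives, false positives, false negatives.
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical labels (e.g. N = normal, A = AF, O = other rhythm).
y_true = ["N", "A", "O", "N", "A", "O", "N", "N"]
y_pred = ["N", "A", "O", "N", "O", "O", "N", "A"]
print(round(macro_f1(y_true, y_pred), 3))  # → 0.719
```

Because the macro average weights every class equally, rare classes (such as AF records) count as much as the majority class, which is why it is preferred over accuracy on this imbalanced data set.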