Sci Rep. 2021 Mar 4;11:5251. doi: 10.1038/s41598-021-84374-8

Table 5. Comparison of the pretraining methods on two new downstream data sets, PTB-XL8,28 and ICBEB201839.

| Pretraining method | PTB-XL AUC | PTB-XL Fmax | ICBEB2018 AUC | ICBEB2018 Fmax | ICBEB2018 Fβ=2 | ICBEB2018 Gβ=2 |
|---|---|---|---|---|---|---|
| None (random weight initialization) | .942 (± .016) | .918 (± .004) | .954 (± .008) | .833 (± .017) | .787 (± .025) | .553 (± .028) |
| Beat classification | .962 (± .006) | **.926** (± .004) | .961 (± .004) | .854 (± .005) | .814 (± .008) | .591 (± .012) |
| Rhythm classification | .958 (± .013) | .922 (± .006) | .956 (± .006) | .848 (± .012) | .807 (± .016) | .575 (± .023) |
| Heart rate classification | **.965** (± .003) | .923 (± .003) | .951 (± .006) | .833 (± .012) | .790 (± .013) | .558 (± .018) |
| Future prediction | .955 (± .004) | .920 (± .003) | .955 (± .005) | .844 (± .011) | .802 (± .017) | .572 (± .021) |
| Related work: Strodthoff et al.28 | .957 (± .015) | .917 (± .008) | **.974** (± .005) | **.855** (± .020) | **.819** (± .028) | **.602** (± .044) |

For each method, we report the average performance (and the standard deviation) on the respective test sets. We observe performance improvements from pretraining on both data sets.

Bold numbers in the ICBEB2018 columns mark the best overall performance (i.e., that of Strodthoff et al.28), rather than the best performance among the pretraining methods (i.e., beat classification).
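As context for the columns above, the sketch below shows one plausible way to compute macro-averaged AUC, Fmax, Fβ=2, and Gβ=2 for a multi-label classifier. The macro averaging, the threshold grid, the reuse of the Fmax-optimal threshold for Fβ and Gβ, and the formula Gβ = TP / (TP + FP + β·FN) are assumptions drawn from common practice and the ICBEB2018 challenge conventions, not code from the paper.

```python
# Minimal sketch of multi-label metric computation (AUC, Fmax, Fbeta, Gbeta).
# Assumptions (not from the paper): macro averaging, a fixed threshold grid,
# and the challenge-style definition G_beta = TP / (TP + FP + beta * FN).
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score, fbeta_score

def g_beta(y_true, y_pred, beta=2.0):
    """Per-class G_beta = TP / (TP + FP + beta * FN), macro-averaged."""
    tp = np.sum((y_true == 1) & (y_pred == 1), axis=0)
    fp = np.sum((y_true == 0) & (y_pred == 1), axis=0)
    fn = np.sum((y_true == 1) & (y_pred == 0), axis=0)
    return float(np.mean(tp / (tp + fp + beta * fn + 1e-12)))

def evaluate(y_true, y_score, beta=2.0):
    """y_true: (n, c) binary label matrix; y_score: (n, c) scores in [0, 1]."""
    # Macro AUC: one ROC AUC per class, then averaged across classes.
    auc = roc_auc_score(y_true, y_score, average="macro")
    # Fmax: the best macro F1 over a grid of decision thresholds.
    fmax, best_t = max(
        (f1_score(y_true, (y_score >= t).astype(int),
                  average="macro", zero_division=0), t)
        for t in np.linspace(0.05, 0.95, 19)
    )
    # Fbeta and Gbeta evaluated at the Fmax-optimal threshold (an assumption).
    y_pred = (y_score >= best_t).astype(int)
    fb = fbeta_score(y_true, y_pred, beta=beta, average="macro", zero_division=0)
    gb = g_beta(y_true, y_pred, beta=beta)
    return {"AUC": auc, "Fmax": fmax, "Fbeta": fb, "Gbeta": gb}
```

The paper's numbers come from its own evaluation pipeline; this sketch is only meant to make the column definitions concrete.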