Healthcare. 2020 Aug 28;8(3):307. doi: 10.3390/healthcare8030307

Table 4.

The 10-fold cross-validation Micro-F1 scores for all methods.

Methods from other teams: Plain SVM [10], Hierarchical SVMs [10]. Our methods: BOW-BiGRU-Att, Word2Vec-BiGRU-Att, FastText-BiGRU-Att, GloVe-BiGRU-Att, ELMo-BiGRU-Att, FT-GPT-FC, FT-BERT-FC.

| Fold | Plain SVM [10] | Hierarchical SVMs [10] | BOW-BiGRU-Att | Word2Vec-BiGRU-Att | FastText-BiGRU-Att | GloVe-BiGRU-Att | ELMo-BiGRU-Att | FT-GPT-FC | FT-BERT-FC | Average/Fold |
|---|---|---|---|---|---|---|---|---|---|---|
| F-1 | 0.682 | 0.739 | 0.658 | 0.710 | 0.728 | 0.719 | 0.750 | 0.743 | **0.789** ¹ | 0.724 |
| F-2 | 0.671 | 0.698 | 0.650 | 0.699 | 0.701 | 0.704 | 0.724 | 0.722 | **0.755** | 0.702 |
| F-3 | 0.639 | 0.682 | 0.643 | 0.677 | 0.673 | 0.680 | 0.707 | 0.721 | **0.750** | 0.686 |
| F-4 | 0.693 | 0.743 | 0.669 | 0.724 | 0.737 | 0.727 | 0.745 | 0.768 | **0.778** | 0.732 |
| F-5 | 0.658 | 0.721 | 0.645 | 0.681 | 0.712 | 0.691 | 0.722 | 0.730 | **0.762** | 0.702 |
| F-6 | 0.677 | 0.728 | 0.662 | 0.700 | 0.680 | 0.703 | 0.731 | 0.735 | **0.771** | 0.710 |
| F-7 | 0.642 | 0.690 | 0.631 | 0.686 | 0.719 | 0.695 | 0.712 | 0.721 | **0.753** | 0.694 |
| F-8 | 0.669 | 0.729 | 0.660 | 0.712 | 0.723 | 0.719 | 0.736 | 0.744 | **0.776** | 0.719 |
| F-9 | 0.690 | 0.735 | 0.668 | 0.703 | 0.718 | 0.702 | 0.749 | 0.747 | **0.791** | 0.723 |
| F-10 | 0.678 | 0.723 | 0.649 | 0.681 | 0.691 | 0.677 | 0.721 | 0.730 | **0.762** | 0.701 |
| Average/Method | 0.670 | 0.719 | 0.654 | 0.697 | 0.708 | 0.702 | 0.730 | 0.736 | **0.769** | / |

¹ The best score in each fold is shown in bold; FT-BERT-FC achieves the best performance in every fold.
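For reference, the sketch below shows one common way to obtain per-fold Micro-F1 scores like those in Table 4, using scikit-learn with a simple TF-IDF + linear SVM baseline. It is a minimal illustration under stated assumptions: the toy corpus, labels, and the TfidfVectorizer/LinearSVC pipeline are placeholders for demonstration, not the authors' data or implementation, and the paper's neural and fine-tuned models are not reproduced here.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Toy corpus standing in for the real text dataset (hypothetical).
texts = (
    [f"headache fever cough case {i}" for i in range(50)]
    + [f"fracture sprain injury record {i}" for i in range(50)]
)
labels = [0] * 50 + [1] * 50

# Bag-of-words (TF-IDF) features feeding a linear SVM, a simple baseline.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())

# 10-fold cross-validation scored with Micro-F1, as reported in Table 4.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
fold_scores = cross_val_score(clf, texts, labels, cv=cv, scoring="f1_micro")

for i, score in enumerate(fold_scores, start=1):
    print(f"F-{i}: {score:.3f}")            # one Micro-F1 value per fold
print(f"Average/Method: {fold_scores.mean():.3f}")
```

Each printed per-fold value corresponds to one row of the table for a single method column, and the mean over the ten folds corresponds to the Average/Method row.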