Author manuscript; available in PMC: 2020 Jul 13.
Published in final edited form as: J Biomed Inform. 2019 Feb 10;91:103123. doi: 10.1016/j.jbi.2019.103123

Table 5:

Results of experiments with “silver” citation context/subject matter annotations and undersampling. All experiments use the best feature combination (ngram_pos + posgram + sent + struct + dep + rule). The best results are underlined. Evaluation is based on 10-fold cross-validation.

| Experiment | Accu. (overall) | Macro F1 (overall) | Cat. | Pr. | Rec. | F1 |
|---|---|---|---|---|---|---|
| Base case | 0.877 | 0.741 | POS | 0.828 | 0.667 | 0.739 |
|  |  |  | NEG | 0.751 | 0.442 | 0.556 |
|  |  |  | NEU | 0.891 | 0.966 | 0.927 |
| “Silver” citation context | 0.868 | 0.718 | POS | 0.818 | 0.638 | 0.717 |
|  |  |  | NEG | 0.709 | 0.403 | 0.513 |
|  |  |  | NEU | 0.885 | 0.965 | 0.923 |
| “Silver” context with normalized subject matter | 0.849 | 0.690 | POS | 0.736 | 0.642 | 0.686 |
|  |  |  | NEG | 0.547 | 0.412 | 0.470 |
|  |  |  | NEU | 0.891 | 0.937 | 0.913 |
| 1:1 ratio (POS+NEG = NEU) | 0.773 | 0.726 | POS | 0.801 | 0.738 | 0.768 |
|  |  |  | NEG | 0.687 | 0.513 | 0.587 |
|  |  |  | NEU | 0.774 | 0.876 | 0.822 |
| 1:2 ratio | 0.824 | 0.709 | POS | 0.808 | 0.641 | 0.715 |
|  |  |  | NEG | 0.675 | 0.425 | 0.522 |
|  |  |  | NEU | 0.840 | 0.948 | 0.891 |
| EasyEnsemble | 0.839 | 0.701 | POS | 0.903 | 0.517 | 0.658 |
|  |  |  | NEG | 0.438 | 0.695 | 0.537 |
|  |  |  | NEU | 0.891 | 0.924 | 0.907 |
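The 1:1 and 1:2 rows correspond to undersampling the majority NEU class so that its size is a fixed multiple of the combined POS+NEG count. The sketch below illustrates that sampling scheme; the function name, data layout, and random-sampling details are illustrative assumptions, not the authors' actual implementation.

```python
import random

def undersample(examples, ratio=1.0, seed=0):
    """Randomly undersample the majority class (NEU) so that
    count(NEU) == ratio * (count(POS) + count(NEG)), capped at the
    available NEU examples. `examples` is a list of (item, label)
    pairs; the 1:1 / 1:2 ratios follow Table 5, but everything else
    here is an illustrative assumption."""
    rng = random.Random(seed)
    minority = [e for e in examples if e[1] in ("POS", "NEG")]
    majority = [e for e in examples if e[1] == "NEU"]
    k = min(len(majority), int(ratio * len(minority)))
    balanced = minority + rng.sample(majority, k)
    rng.shuffle(balanced)
    return balanced

# Toy imbalanced dataset: 100 NEU, 15 POS, 10 NEG.
data = ([("x%d" % i, "NEU") for i in range(100)]
        + [("p%d" % i, "POS") for i in range(15)]
        + [("n%d" % i, "NEG") for i in range(10)])

one_to_one = undersample(data, ratio=1.0)  # keeps 25 of 100 NEU
one_to_two = undersample(data, ratio=2.0)  # keeps 50 of 100 NEU
```

EasyEnsemble, by contrast, repeats this kind of undersampling several times and combines the resulting classifiers, so no majority-class example is permanently discarded.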