
Table 1.

Performance comparison on BiolarkGSC+ and COPD-HPO

                         BiolarkGSC+                    COPD-HPO
Method                   Precision  Recall  F1-score   Precision  Recall  F1-score
OBO Annotator [9]        0.810      0.568   0.668      0.318      0.282   0.299
NCBO [10]                0.777      0.521   0.624      0.756      0.763   0.760
MonarchInitiative [16]   0.751      0.608   0.672      0.741      0.747   0.744
Doc2hpo-Ensemble [15]    0.754      0.608   0.673      0.779      0.755   0.767
MetaMap [12]             0.707      0.599   0.649      0.640      0.781   0.704
Clinphen [11]            0.590      0.418   0.489      0.377      0.328   0.351
NeuralCR [14]            0.736      0.610   0.667      0.543      0.719   0.619
TrackHealth              0.757      0.595   0.666      0.719      0.669   0.693
PhenoTagger [17]         0.720      0.760*  0.740      0.623      0.820*  0.708
MMRerank                 0.754      0.599   0.668      0.822      0.779   0.800
MNIRerank                0.789      0.603   0.683      0.802      0.736   0.768
PTRerank                 0.843*     0.708   0.770*     0.836*     0.771   0.802*

Note. MMRerank, MNIRerank, and PTRerank denote the re-ranking models built on MetaMap, MonarchInitiative, and PhenoTagger, respectively. An asterisk (*) marks the best score for each metric on each corpus.
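The F1-score column is the standard harmonic mean of precision and recall; the worked equation below is not part of the original table and is included only as a consistency check against the reported PTRerank row on BiolarkGSC+.

% Standard F1 definition (harmonic mean of precision P and recall R)
F_1 = \frac{2 P R}{P + R}

% Check: PTRerank on BiolarkGSC+ with P = 0.843, R = 0.708
F_1 = \frac{2 \times 0.843 \times 0.708}{0.843 + 0.708} \approx 0.770

The result matches the tabulated F1-score of 0.770 to three decimal places.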