BMC Bioinformatics. 2021 Dec 17;22(Suppl 1):598. doi: 10.1186/s12859-021-04141-4

Table 7.

Hardware, memory, and time used for training for all evaluated algorithms

| Algorithm | Hardware | Training memory (GB) | Training time (h) |
| --- | --- | --- | --- |
| CRF | CPUs | 2–13 | 1–4 |
| BiLSTM* | GPUs/CPUs** | 17 | 29 |
| BiLSTM-CRF | CPUs | 7 | 15 |
| Char-Embeddings | CPUs | 30 | 84 |
| BiLSTM-ELMo* | GPUs | 42 | 700–1000 |
| BioBERT | GPUs/CPUs** | 5 | 20 |
| UZH@CRAFT-ST BioBERT* [4] | GPUs | 120*** | 200 |
| OpenNMT* | CPUs | 620 | 515 |
| ConceptMapper [20] | CPUs | N/A | N/A |

Each training time is the total hours if training for all ontology annotation sets were run consecutively; the runs can instead be parallelized by ontology, as in the sketch below
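
As a rough illustration of the per-ontology parallelization mentioned above, the following Python sketch launches one training process per ontology. The `train_model.py` entry point and its `--ontology` flag are hypothetical placeholders, not part of any released code from the paper; the ontology list follows the CRAFT concept annotation sets but should be adjusted to your setup.

```python
# Minimal sketch: parallelize training by ontology instead of running
# the ontology annotation sets consecutively.
# NOTE: "train_model.py" and its "--ontology" flag are assumed placeholders.
import subprocess
from concurrent.futures import ProcessPoolExecutor

# Ontology annotation sets (here, the CRAFT concept sets); adjust as needed.
ONTOLOGIES = [
    "CHEBI", "CL", "GO_BP", "GO_CC", "GO_MF",
    "MOP", "NCBITaxon", "PR", "SO", "UBERON",
]

def train(ontology: str) -> int:
    """Launch one training run for a single ontology annotation set."""
    cmd = ["python", "train_model.py", "--ontology", ontology]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    # With one worker per ontology, the wall-clock cost drops from the
    # consecutive total to roughly the longest single-ontology run.
    with ProcessPoolExecutor(max_workers=len(ONTOLOGIES)) as pool:
        for onto, rc in zip(ONTOLOGIES, pool.map(train, ONTOLOGIES)):
            print(f"{onto}: exit code {rc}")
```

In practice the number of concurrent workers is bounded by available memory and GPUs, which is why the starred algorithms in the table were parallelized per ontology only when time constraints required it.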

ConceptMapper runs on CPUs but requires no training, as it is a dictionary-based lookup tool; its training specifications are therefore listed as N/A

*Parallelized per ontology due to time constraints

**Runs significantly faster on GPUs

***Total free RAM available