2021 Oct 8;9(10):e29584. doi: 10.2196/29584

Table 2.

Accuracy comparisons.


| Model | Accuracy | F score |
|---|---|---|
| Coder average | 0.795861 | 0.739396 |
| Coder 1: EK | 0.833211 | 0.796272 |
| Coder 2: SCM | 0.775165 | 0.710356 |
| Coder 3: SD and CD | 0.779206 | 0.711559 |
| Neural network: no embeddings | 0.436697 | 0.436697 |
| Neural network: GloVe^a word embeddings | 0.544954 | 0.457813 |
| LSTM^b: no embeddings | 0.631193 | 0.549997 |
| LSTM + GloVe word embeddings | 0.655046 | 0.593942 |
| BERT^c: default weights | 0.766972^d | 0.718878 |
| BERT: domain-specific | 0.818349^d | 0.775830 |

^a GloVe: Global Vectors for Word Representation.

^b LSTM: long short-term memory.

^c BERT: Bidirectional Encoder Representations from Transformers.

^d The final accuracy scores for the BERT-based models were obtained by selecting the best-performing network on the development data set; the reported numbers are from evaluating the training data set.
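For reference, the two metrics in Table 2 can be computed from gold and predicted labels as below. This is a minimal sketch, not the authors' evaluation code; the table does not state how the F score was averaged across classes, so the macro (unweighted) average used here is an assumption.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that exactly match the gold label."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores (averaging scheme assumed)."""
    labels = set(y_true) | set(y_pred)
    f1_scores = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        # F1 is the harmonic mean of precision and recall
        f1_scores.append(2 * precision * recall / (precision + recall)
                         if precision + recall else 0.0)
    return sum(f1_scores) / len(f1_scores)
```

With a highly imbalanced label set, accuracy and macro F1 can diverge substantially, which is consistent with the gap between the two columns for most models in the table.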