2019 May;5(5):576–586. doi: 10.1016/j.jacep.2019.02.003

Table 2.

Performance of the 5 Network Designs

| Architecture (Ref. #) | Trainable Parameters (millions) | Loss (Lower Is Better) | Accuracy, % (Higher Is Better) |
|---|---|---|---|
| DenseNet 121 (9) | 7.0 | 0.36 | 90.8 |
| Inception V3 (6) | 21.9 | 1.06 | 79.5 |
| ResNet (7) | 23.6 | 3.24 | 44.9 |
| VGGNet 16 (5) | 14.7 | 4.33 | 4.4 |
| Xception (8) | 20.9 | 0.34 | 91.1 |

Results of stage 1, in which the 5 architectures were compared after each was trained on three-fourths of the training data at a time. Loss is a measure of prediction error that penalizes confident wrong predictions more heavily than unconfident ones.
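To illustrate how such a loss behaves, the minimal sketch below assumes the reported loss is categorical cross-entropy (a common choice for multi-class classifiers; the exact loss function is not stated in the table). It shows that a confident wrong prediction is penalized far more than an unconfident one.

```python
import math

def cross_entropy(true_idx, probs):
    """Loss for one example: the negative log of the probability assigned to the true class."""
    return -math.log(probs[true_idx])

# Confident and correct -> small loss (~0.05)
print(cross_entropy(0, [0.95, 0.03, 0.02]))

# Unconfident and wrong -> moderate loss (~0.92)
print(cross_entropy(0, [0.40, 0.35, 0.25]))

# Confident and wrong -> large loss (~3.91)
print(cross_entropy(0, [0.02, 0.95, 0.03]))
```

Averaged over a dataset, this is why a network can have both low accuracy and a very high loss, as seen for VGGNet 16 in the table: its wrong predictions are made with high confidence.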