Table 3.
Baseline classification results. Opt signifies optimizer, Lr signifies learning rate, Lf signifies loss function, Adam signifies adaptive moment estimation, and SGD signifies stochastic gradient descent.
Models | Accuracy | Sensitivity | Specificity | Precision | F1 Score | AUC |
---|---|---|---|---|---|---|
Opt: Adam, Lf: Categorical Smooth Loss, Lr: | | | | | | |
Baseline | 0.90468 | 0.81116 | 0.93631 | 0.83653 | 0.81355 | 0.87373 |
Opt: Adam, Lf: Categorical Smooth Loss, Lr: | | | | | | |
Baseline | 0.9097 | 0.81579 | 0.93941 | 0.83356 | 0.81979 | 0.87881 |
Opt: Adam, Lf: Categorical Cross-Entropy, Lr: | | | | | | |
Baseline | 0.91973 | 0.84466 | 0.94713 | 0.85426 | 0.84114 | 0.89426 |
Opt: SGD, Lf: Categorical Cross-Entropy, Lr: | | | | | | |
Baseline | 0.90301 | 0.80312 | 0.93645 | 0.81609 | 0.80225 | 0.8729 |
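For reference, the sketch below shows one way the metrics reported in Table 3 can be computed for a multi-class classifier. It is not the paper's evaluation code: macro-averaging over classes is assumed, and `y_true` (integer labels) and `y_prob` (softmax outputs) are hypothetical placeholders.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def classification_metrics(y_true, y_prob):
    """Accuracy, sensitivity, specificity, precision, F1, and AUC (macro-averaged)."""
    y_pred = np.argmax(y_prob, axis=1)
    cm = confusion_matrix(y_true, y_pred)   # rows: true classes, cols: predicted classes

    # Per-class counts derived from the confusion matrix.
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    tn = cm.sum() - tp - fp - fn

    sensitivity = tp / (tp + fn)            # recall per class
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)

    return {
        "accuracy": tp.sum() / cm.sum(),
        "sensitivity": sensitivity.mean(),
        "specificity": specificity.mean(),
        "precision": precision.mean(),
        "f1": f1.mean(),
        "auc": roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro"),
    }
```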