2023 Mar 15:1–22. Online ahead of print. doi: 10.1007/s11042-023-14790-7

Table 1. Experimental setup

| Parameter name | Parameter value |
|---|---|
| pre-training config | DenseNet121 |
| optimizer | Adam |
| base learning rate | 1e-4 |
| weight decay | 3e-4 |
| optimizer momentum (β1, β2) | 0.9, 0.999 |
| batch size | 32 |
| training epochs | 40 |
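The settings in Table 1 correspond to a standard deep-learning training configuration. The sketch below is an illustrative mapping only (the dict keys, the `build_optimizer` helper, and the PyTorch `torch.optim.Adam` call are assumptions, not the authors' code); note that β1 = 0.9, β2 = 0.999 are the PyTorch defaults for Adam:

```python
# Hyperparameters transcribed from Table 1 (names of dict keys are illustrative).
config = {
    "backbone": "DenseNet121",  # pre-training config
    "optimizer": "Adam",
    "base_lr": 1e-4,
    "weight_decay": 3e-4,
    "betas": (0.9, 0.999),      # Adam momentum terms β1, β2
    "batch_size": 32,
    "epochs": 40,
}

def build_optimizer(params, cfg):
    """Construct an Adam optimizer from the Table 1 settings.

    `params` are model parameters; PyTorch is required only at call time.
    """
    import torch
    return torch.optim.Adam(
        params,
        lr=cfg["base_lr"],
        betas=cfg["betas"],
        weight_decay=cfg["weight_decay"],
    )
```

With this configuration, the backbone would typically be initialized from ImageNet-pretrained DenseNet121 weights and fine-tuned for 40 epochs at batch size 32.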