2023 Mar 29;36(4):1332–1347. doi: 10.1007/s10278-023-00801-4

Table 2.

Comparisons of different models in our study

| Models | Params | FLOPs | Resolution (C × H × W) | Batch size | Epochs | Optimizer | lr |
|---|---|---|---|---|---|---|---|
| ResNet-50 | 26 M | 4112 M | 1 × 244 × 244 | 150 | 100 | Adam | 5.0e−05 |
| EfficientNet-b5 | 30 M | 2413 M | 1 × 244 × 244 | 130* | 100 | Adam | 5.0e−05 |
| CoAtNet-0-rw | 27 M | 4215 M | 1 × 244 × 244 | 150 | 100 | Adam | 5.0e−05 |

FLOPs, floating-point operations; lr, learning rate

*Batch size reduced owing to GPU memory limitations
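
The table can be read as a common training configuration across the three backbones, differing mainly in the EfficientNet-b5 batch size. Below is a minimal sketch of that setup, assuming PyTorch with the timm library; the timm model identifiers, the single-channel input handling, and the number of output classes are assumptions, while the batch sizes, epochs, optimizer, and learning rate are taken from Table 2.

```python
# Sketch of the Table 2 training configuration (assumed timm/PyTorch setup).
import timm
import torch

BATCH_SIZES = {
    "resnet50": 150,               # assumed timm identifier for ResNet-50
    "efficientnet_b5": 130,        # batch size reduced owing to GPU memory limitation
    "coatnet_0_rw_224": 150,       # assumed timm identifier for CoAtNet-0-rw
}
EPOCHS = 100
LR = 5.0e-5                        # Adam learning rate from Table 2
INPUT_SHAPE = (1, 244, 244)        # C*H*W from the Resolution column


def build_model(name: str, num_classes: int = 2) -> torch.nn.Module:
    # in_chans=1 matches the single-channel resolution; num_classes is a placeholder.
    return timm.create_model(name, pretrained=False, in_chans=1, num_classes=num_classes)


def build_optimizer(model: torch.nn.Module) -> torch.optim.Optimizer:
    # Adam optimizer with the learning rate listed in Table 2.
    return torch.optim.Adam(model.parameters(), lr=LR)


if __name__ == "__main__":
    for name, batch_size in BATCH_SIZES.items():
        model = build_model(name)
        n_params = sum(p.numel() for p in model.parameters()) / 1e6
        print(f"{name}: {n_params:.1f} M params, batch size {batch_size}, "
              f"{EPOCHS} epochs, Adam lr={LR}")
```

Printing the parameter counts gives a quick sanity check against the Params column; exact figures may differ slightly depending on the library version and the assumed number of output classes.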