Sci Rep. 2023 Mar 3;13:3595. doi: 10.1038/s41598-023-30480-8

Table 2. Comparison of the baseline and proposed models in terms of computational efficiency, memory, and time*.

Model          Params     FLOPs      Size    Training   Inference
               (million)  (billion)  (MB)    (min)**    (s)
Res2Net        23.53      1.75       90.21   2.28       56.80
LDR            21.72      1.40       92.22   1.98       42.21
ESM            23.00      1.67       77.34   2.19       59.43
VIN            27.16      2.10       95.53   2.35       70.10
ShuffleNetV2    1.23      1.24        5.10   1.04       34.53
MnasNet         3.12      0.93       12.41   1.31       26.59
MobileNetV3     2.24      0.73        8.96   0.53        8.75
Ours            0.86      0.65        3.44   0.47        7.53

The number of trainable parameters and the training and inference times may differ depending on the characteristics of the dataset. *This information is based on experiments using a 32 GB NVIDIA Tesla V100-SXM2 GPU. **Average training time per epoch.

Significant values are in bold.
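
The table does not show how these quantities were obtained. As a rough illustration only (this is not the authors' code; the ResNet-18 stand-in, the 224x224 input shape, and the iteration counts are assumptions), the sketch below measures the trainable parameter count, approximate model size, and average per-pass inference time for an arbitrary PyTorch model, in the same units as the table.

```python
# Minimal sketch for measuring Table-2-style efficiency metrics in PyTorch.
# The model and input shape are placeholders, not the paper's architecture.
import time
import torch
import torchvision.models as models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.resnet18().to(device).eval()  # illustrative stand-in model

# Trainable parameters (Table 2 reports these in millions).
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Params: {n_params / 1e6:.2f} M")

# Approximate model size in MB from parameter and buffer storage.
size_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
size_bytes += sum(b.numel() * b.element_size() for b in model.buffers())
print(f"Size: {size_bytes / 1024**2:.2f} MB")

# Inference time: average over repeated forward passes after a warm-up,
# synchronizing the GPU so the timer only counts completed work.
x = torch.randn(1, 3, 224, 224, device=device)  # assumed input shape
with torch.no_grad():
    for _ in range(10):  # warm-up passes
        model(x)
    if device.type == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(100):
        model(x)
    if device.type == "cuda":
        torch.cuda.synchronize()
    avg_ms = (time.perf_counter() - start) / 100 * 1e3
    print(f"Inference: {avg_ms:.2f} ms/pass")
```

FLOPs are not computed above because they require a tracing tool rather than plain PyTorch; libraries such as fvcore provide this, though the paper does not state which counter was used.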