2022 May 12;12(2):108–113. doi: 10.4103/jmss.jmss_108_21

Table 2. Computational comparison between WP-UNet and other models

| Model | Parameters | FLOPs |
|---|---|---|
| UNet | 5,680,353 | 62.4e |
| UNet (depth-wise + BN) | 2,601,921 | 7.8e |
| WP-UNet (network pruning + depth-wise + BN) | 1,297,441 | 7.2e |

BN – Batch normalization; WP – Weight pruning
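The roughly 2× parameter drop between the plain UNet and the depth-wise variant follows from standard convolution arithmetic: a depthwise-separable layer replaces one k×k convolution over all channel pairs with a per-channel k×k convolution plus a 1×1 pointwise convolution. A minimal sketch of that count (the channel sizes below are hypothetical, not taken from the paper):

```python
# Illustrative parameter-count arithmetic for depthwise-separable
# convolutions. Channel sizes are hypothetical examples, not values
# from the paper's UNet.

def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (weights + biases)."""
    return c_in * c_out * k * k + c_out

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel)
    followed by a 1 x 1 pointwise conv, each with biases."""
    depthwise = c_in * k * k + c_in
    pointwise = c_in * c_out + c_out
    return depthwise + pointwise

if __name__ == "__main__":
    c_in, c_out, k = 64, 128, 3
    std = conv_params(c_in, c_out, k)        # 73,856
    sep = depthwise_separable_params(c_in, c_out, k)  # 8,960
    print(std, sep, round(std / sep, 1))     # reduction factor ~8.2
```

Per layer the saving can be much larger than the whole-model factor in the table, since UNet also contains layers (e.g., 1×1 convolutions) that depthwise separation does not shrink.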