Sensors. 2020 Jul 8;20(14):3816. doi: 10.3390/s20143816

Table 5.

Average per-image inference time on MJU-Waste. The baseline method is DeepLabv3 with a ResNet-50 backbone, which corresponds to the scene-level inference time. Additional object-level and pixel-level inference incurs extra computational cost. System specifications: i9-9900KS CPU, 64 GB DDR4 RAM, RTX 2080Ti GPU. The test batch size is 1 with FP32 precision. See Section 4.3 for details.

MJU-Waste (val)        Scene-Level   Object-Level   Pixel-Level   Total
Inference time (ms)    52            352            398           802
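
The scene-level figure above corresponds to a single forward pass of the DeepLabv3 ResNet-50 baseline at batch size 1 in FP32. The following is a minimal sketch of how such a per-image timing could be reproduced; it uses torchvision's DeepLabv3-ResNet50 as a stand-in for the paper's baseline, and the number of classes, input resolution, and warm-up/run counts are assumptions for illustration, not values from the paper.

import time
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Scene-level baseline model. num_classes=2 (background vs. waste) is an
# assumption for illustration; it does not affect timing noticeably.
model = deeplabv3_resnet50(num_classes=2).to(device).eval()

# Batch size 1, FP32, as stated in the table caption. The 480x640 input
# resolution is an assumed example value.
dummy = torch.randn(1, 3, 480, 640, device=device)

with torch.no_grad():
    # Warm-up passes so one-time GPU initialization is excluded from timing.
    for _ in range(10):
        model(dummy)
    if device.type == "cuda":
        torch.cuda.synchronize()

    n_runs = 100
    start = time.perf_counter()
    for _ in range(n_runs):
        model(dummy)
    if device.type == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"Average per-image inference time: {1000 * elapsed / n_runs:.1f} ms")

The object-level and pixel-level refinement stages reported in the table are part of the proposed method and are not reproduced by this sketch; their costs add to the scene-level time to give the total of 802 ms per image.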