Table 6.
Performance comparisons on the test set of TACO. For each method, we report the IoU for waste objects (IoU), mean IoU (mIoU), pixel precision for waste objects (Prec), and mean pixel precision (Mean). Values in parentheses indicate the gain of each proposed ML model over its corresponding baseline. See Section 4.4 for details.
Dataset: TACO (Test) | Backbone | IoU | mIoU | Prec | Mean |
---|---|---|---|---|---|
**Baseline Approaches** | | | | | |
FCN-8s [17] | VGG-16 | 70.43 | 84.31 | 85.50 | 92.21 |
DeepLabv3 [23] | ResNet-101 | 83.02 | 90.99 | 88.37 | 94.00 |
**Proposed Multi-Level (ML) Model** | | | | | |
FCN-8s-ML | VGG-16 | 74.21 | 86.35 | 90.36 | 94.65 |
 | | (+3.78) | (+2.04) | (+4.86) | (+2.44) |
DeepLabv3-ML | ResNet-101 | 86.58 | 92.90 | 92.52 | 96.07 |
 | | (+3.56) | (+1.91) | (+4.15) | (+2.07) |