Table 13. Comparisons of recall on RMBD-DB1 at different IoU thresholds.
| Methods | IoU = 0.3 | 0.35 | 0.4 | 0.45 | 0.5 | 0.55 | 0.6 | 0.65 | 0.7 | 0.75 | 0.8 | 0.85 | 0.9 | 0.95 | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| YOLOv2 [45] | 0.959 | 0.947 | 0.928 | 0.898 | 0.851 | 0.749 | 0.596 | 0.410 | 0.241 | 0.129 | 0.050 | 0.019 | 0.003 | 0.000 | 0.484 |
| DeepDeblur [23] + YOLOv2 | 0.991 | 0.986 | 0.978 | 0.956 | 0.907 | 0.799 | 0.657 | 0.459 | 0.264 | 0.127 | 0.057 | 0.016 | 0.005 | 0.001 | 0.514 |
| DeblurGAN [22] + YOLOv2 | 0.960 | 0.949 | 0.933 | 0.907 | 0.871 | 0.829 | 0.756 | 0.657 | 0.534 | 0.376 | 0.203 | 0.080 | 0.021 | 0.002 | 0.577 |
| DeblurGAN (MobileNet) [50] + YOLOv2 | 0.973 | 0.966 | 0.957 | 0.942 | 0.915 | 0.861 | 0.798 | 0.695 | 0.540 | 0.367 | 0.176 | 0.075 | 0.018 | 0.003 | 0.592 |
| DCSCN + lightDenseYOLO [21] | 0.960 | 0.947 | 0.929 | 0.899 | 0.852 | 0.749 | 0.597 | 0.409 | 0.242 | 0.128 | 0.051 | 0.019 | 0.004 | 0.000 | 0.485 |
| SlimDeblurGAN + YOLOv2 (ours) | 0.975 | 0.969 | 0.955 | 0.935 | 0.911 | 0.870 | 0.810 | 0.709 | 0.570 | 0.411 | 0.220 | 0.103 | 0.032 | 0.005 | 0.605 |
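
The "Average" column is the mean of the recall values over the 14 IoU thresholds from 0.3 to 0.95 in steps of 0.05. The sketch below illustrates, under assumed inputs (axis-aligned boxes as `(x1, y1, x2, y2)` tuples) and a simple greedy one-to-one matching rule, how recall at each threshold and its average could be computed; it is not the authors' exact evaluation code.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def recall_at_thresholds(detections, ground_truths, thresholds):
    """Per threshold, the fraction of ground-truth boxes matched by at
    least one detection with IoU >= threshold. `detections` and
    `ground_truths` are per-image lists of boxes (assumed layout)."""
    recalls = []
    for t in thresholds:
        matched, total = 0, 0
        for dets, gts in zip(detections, ground_truths):
            total += len(gts)
            used = set()  # each detection may match at most one ground truth
            for gt in gts:
                for i, det in enumerate(dets):
                    if i not in used and iou(det, gt) >= t:
                        used.add(i)
                        matched += 1
                        break
        recalls.append(matched / max(total, 1))
    return recalls

# 14 thresholds: 0.30, 0.35, ..., 0.95, matching the table columns
thresholds = [round(0.30 + 0.05 * i, 2) for i in range(14)]

# Toy example: one image with one ground-truth box and one detection
dets = [[(10, 10, 50, 50)]]
gts = [[(12, 12, 52, 52)]]
recalls = recall_at_thresholds(dets, gts, thresholds)
print({t: round(r, 3) for t, r in zip(thresholds, recalls)})
print("Average recall:", round(sum(recalls) / len(recalls), 3))
```

Because recall is evaluated with the same detections at every threshold, it necessarily decreases (or stays flat) as the IoU threshold rises, which matches the monotone drop seen in every row of the table; the "Average" column then summarizes each pipeline's localization quality across the whole threshold range.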