Sensors. 2023 Sep 25;23(19):8071. doi: 10.3390/s23198071

Table 2.

Average evaluation metric values of different methods on the LLVIP dataset. For each metric, the best value is shown in bold and the second-best is underlined.

| Methods | EN | SF | SD | Q^(AB/F) | AG | Deep Learning |
|---|---|---|---|---|---|---|
| Wavelet [29] | 6.8964 | 0.0241 | 9.4977 | 0.1996 | 2.0167 | ✗ |
| FPDE [33] | 6.9161 | 0.0451 | 9.4264 | 0.4909 | 3.6297 | ✗ |
| ADF [30] | 6.9282 | 0.0489 | 9.4236 | 0.5273 | 3.8861 | ✗ |
| LatLRR [32] | 6.9748 | 0.0450 | 9.3101 | 0.4535 | 3.2089 | ✗ |
| TIF [31] | 7.0605 | <u>0.0635</u> | 9.4683 | **0.6354** | 4.7440 | ✗ |
| IFEVIP [34] | 7.4487 | 0.0566 | 9.6836 | 0.4957 | 4.1067 | ✗ |
| DenseFuse [8] | 6.8899 | 0.0375 | 9.4237 | 0.3530 | 2.9379 | ✓ |
| FusionGAN [6] | 7.0468 | 0.0293 | 10.0528 | 0.2956 | 2.3374 | ✓ |
| TarDal [5] | 7.1872 | 0.0511 | 9.6212 | 0.3857 | 3.5221 | ✓ |
| STDFusionNet [2] | 5.4825 | 0.0522 | 6.8897 | 0.4898 | 3.4384 | ✓ |
| DIDFuse [10] | 6.1477 | 0.0508 | 8.0359 | 0.3605 | **6.2487** | ✓ |
| SeAFusion [7] | 7.4457 | 0.0626 | 9.8828 | <u>0.6254</u> | 4.7663 | ✓ |
| DIVFusion [28] | <u>7.5716</u> | 0.0547 | <u>10.0577</u> | 0.3312 | 4.6006 | ✓ |
| Ours | **7.6913** | **0.0667** | **10.1742** | 0.4865 | <u>5.6376</u> | ✓ |
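The columns are standard fusion-quality metrics: entropy (EN), spatial frequency (SF), standard deviation (SD), gradient-based fusion quality Q^(AB/F), and average gradient (AG). As an illustration, here is a minimal NumPy sketch of the four no-reference metrics (EN, SF, SD, AG) under their common textbook definitions; the paper's exact formulations may differ slightly, and Q^(AB/F) is omitted since it also requires both source images.

```python
import numpy as np

def entropy(img):
    # EN: Shannon entropy of the 8-bit grayscale histogram, in bits.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # avoid log2(0)
    return float(-np.sum(p * np.log2(p)))

def spatial_frequency(img):
    # SF: sqrt(RF^2 + CF^2), RF/CF from row/column first differences.
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return float(np.sqrt(rf ** 2 + cf ** 2))

def std_dev(img):
    # SD: standard deviation of pixel intensities.
    return float(np.std(img.astype(np.float64)))

def average_gradient(img):
    # AG: mean magnitude of local horizontal/vertical gradients.
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]  # trim both to a common shape
    gy = np.diff(img, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fused = rng.integers(0, 256, size=(64, 64))  # stand-in fused image
    print(f"EN={entropy(fused):.4f} SF={spatial_frequency(fused):.4f} "
          f"SD={std_dev(fused):.4f} AG={average_gradient(fused):.4f}")
```

Higher values of all four metrics indicate richer information content, stronger edges/texture, and higher contrast in the fused image, which is why "Ours" leading in EN, SF, and SD supports the paper's claim.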