Sensors. 2022 Jul 13;22(14):5250. doi: 10.3390/s22145250

Table 9.

Performance comparison of different multi-label classification NILM methods in seen scenarios. The number in bold is the largest value among the three compared models for each device.

| Device | Model | F1-score | Precision | Recall | Accuracy | MCC | MAE | SAE |
|---|---|---|---|---|---|---|---|---|
| AHU0 | CNN | 0.871 | 0.783 | 0.982 | 0.907 | 0.812 | 273.71 | 0.286 |
| | TP-NILM | 0.926 | 0.875 | 0.985 | 0.948 | 0.891 | 169.85 | 0.160 |
| | TTRNet | 0.892 | 0.986 | 0.819 | 0.938 | 0.859 | 185.19 | −0.148 |
| AHU1 | CNN | 0.683 | 0.529 | 0.971 | 0.800 | 0.612 | 702.30 | 0.836 |
| | TP-NILM | 0.772 | 0.650 | 0.959 | 0.872 | 0.716 | 484.95 | 0.485 |
| | TTRNet | 0.951 | 0.978 | 0.928 | 0.979 | 0.940 | 142.76 | −0.059 |
| AHU2 | CNN | 0.871 | 0.781 | 0.985 | 0.894 | 0.798 | 422.14 | 0.206 |
| | TP-NILM | 0.923 | 0.873 | 0.984 | 0.938 | 0.879 | 272.84 | 0.088 |
| | TTRNet | 0.992 | 0.998 | 0.986 | 0.994 | 0.988 | 80.41 | −0.057 |
| AHU5 | CNN | 0.880 | 0.795 | 0.985 | 0.902 | 0.813 | 1235.75 | 0.325 |
| | TP-NILM | 0.930 | 0.874 | 0.996 | 0.945 | 0.891 | 847.91 | 0.223 |
| | TTRNet | 0.991 | 0.990 | 0.992 | 0.994 | 0.986 | 380.09 | 0.070 |
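The metrics in Table 9 can be computed from per-timestep on/off predictions and estimated power traces for each appliance. The sketch below is a minimal illustration of one common way to obtain them, assuming NumPy and scikit-learn and using hypothetical arrays (`y_true`/`y_pred` for on/off states, `p_true`/`p_pred` for power) that are not taken from the paper; the paper's exact definition of SAE may differ, and here it is taken as the normalised difference of total estimated and true energy, which is why it can be negative.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, matthews_corrcoef,
                             mean_absolute_error, precision_score, recall_score)

# Hypothetical per-timestep data for one appliance (illustrative only):
# binary on/off states and the corresponding power traces.
y_true = np.array([0, 1, 1, 1, 0, 1, 0, 0])
y_pred = np.array([0, 1, 1, 0, 0, 1, 1, 0])
p_true = np.array([0.0, 950.0, 980.0, 960.0, 0.0, 940.0, 0.0, 0.0])
p_pred = np.array([0.0, 930.0, 990.0, 10.0, 0.0, 950.0, 870.0, 0.0])

# Classification metrics reported in the table.
print("F1-score :", f1_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("Accuracy :", accuracy_score(y_true, y_pred))
print("MCC      :", matthews_corrcoef(y_true, y_pred))

# Regression metrics: mean absolute error of the estimated power, and the
# signal aggregate error as a normalised total-energy difference.
mae = mean_absolute_error(p_true, p_pred)
sae = (p_pred.sum() - p_true.sum()) / p_true.sum()
print("MAE      :", mae)
print("SAE      :", sae)
```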