Table 2. Comparison of evaluation metrics of the HGR framework on the Jester dataset.
Activities | ANN Precision | ANN Recall | ANN F-measure | Adaboost Precision | Adaboost Recall | Adaboost F-measure | Decision Trees Precision | Decision Trees Recall | Decision Trees F-measure |
---|---|---|---|---|---|---|---|---|---|
J1 | 0.909 | 0.900 | 0.904 | 0.883 | 0.870 | 0.876 | 0.871 | 0.880 | 0.875 |
J2 | 0.884 | 0.920 | 0.902 | 0.876 | 0.900 | 0.888 | 0.864 | 0.890 | 0.877 |
J3 | 0.927 | 0.900 | 0.913 | 0.912 | 0.910 | 0.911 | 0.903 | 0.910 | 0.906 |
J4 | 0.921 | 0.940 | 0.930 | 0.908 | 0.900 | 0.904 | 0.901 | 0.900 | 0.900 |
J5 | 0.880 | 0.880 | 0.880 | 0.860 | 0.850 | 0.855 | 0.842 | 0.850 | 0.846 |
J6 | 0.893 | 0.910 | 0.901 | 0.878 | 0.880 | 0.879 | 0.874 | 0.820 | 0.846 |
J7 | 0.873 | 0.900 | 0.886 | 0.864 | 0.870 | 0.867 | 0.862 | 0.860 | 0.861 |
J8 | 0.895 | 0.860 | 0.877 | 0.889 | 0.910 | 0.899 | 0.879 | 0.890 | 0.884 |
J9 | 0.883 | 0.910 | 0.896 | 0.875 | 0.930 | 0.902 | 0.860 | 0.880 | 0.870 |
J10 | 0.888 | 0.880 | 0.884 | 0.878 | 0.890 | 0.884 | 0.865 | 0.840 | 0.852 |
J11 | 0.866 | 0.910 | 0.887 | 0.860 | 0.900 | 0.880 | 0.848 | 0.860 | 0.854 |
J12 | 0.870 | 0.870 | 0.870 | 0.862 | 0.850 | 0.856 | 0.857 | 0.880 | 0.868 |
J13 | 0.936 | 0.890 | 0.912 | 0.918 | 0.890 | 0.904 | 0.908 | 0.900 | 0.904 |
Mean | 0.894 | 0.897 | 0.895 | 0.881 | 0.888 | 0.885 | 0.871 | 0.873 | 0.872 |
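
As a point of reference, the per-gesture values in Table 2 follow the standard definitions: precision and recall are computed per class, the F-measure is their harmonic mean, and the Mean row corresponds to the macro average over the 13 gesture classes. The sketch below shows one way such a breakdown could be produced with scikit-learn; the arrays `y_true` and `y_pred`, and the helper `per_class_metrics`, are illustrative placeholders and not part of the original work.

```python
# Minimal sketch (not the authors' code): per-gesture precision, recall and
# F-measure for 13 Jester classes, plus their macro (Mean) averages.
from sklearn.metrics import precision_recall_fscore_support

labels = [f"J{i}" for i in range(1, 14)]  # the 13 gesture classes J1-J13


def per_class_metrics(y_true, y_pred):
    # y_true, y_pred: ground-truth and predicted gesture labels produced by
    # one classifier (e.g. ANN, Adaboost, or Decision Trees).
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, labels=labels, zero_division=0
    )
    for cls, p, r, f in zip(labels, precision, recall, f1):
        print(f"{cls}: precision={p:.3f} recall={r:.3f} f-measure={f:.3f}")
    # The "Mean" row of Table 2 corresponds to the macro average over classes.
    print(f"Mean: precision={precision.mean():.3f} "
          f"recall={recall.mean():.3f} f-measure={f1.mean():.3f}")
```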