Author manuscript; available in PMC: 2018 Aug 10.
Published in final edited form as: J Health Med Inform. 2017 Jul 15;8(3):272. doi: 10.4172/2157-7420.1000272

Table 3:

Traditional and deep learning classification methods.

Traditional Methods

| Study | Features | Classifier | Database | Top-1 Acc.** | Top-5 Acc.** | Ref. |
|---|---|---|---|---|---|---|
| Chen, 2012 | SIFT, LBP, color and Gabor | Multi-class AdaBoost | Chen | 68.3% | 90.9% | [22] |
| Beijbom et al., 2015 | SIFT, LBP, color, HOG and MR8 | SVM | | 77.4% | 96.2% | [20] |
| Anthimopoulos et al., 2014 | SIFT and color | Bag of Words and SVM | Diabetes | 78.0% | - | [31] |
| Bossard et al., 2014 | SURF and L\*a\*b color values | RFDC | Food-101 | 50.8% | - | [18] |
| Hoashi et al., 2010 | Bag of features, color, Gabor texture and HOG | MKL | Food85 | 62.5% | - | [25] |
| Beijbom et al., 2015 | SIFT, LBP, color, HOG and MR8 | SVM | Menu-Match | 51.2%* | - | [20] |
| Christodoulidis et al., 2015 | Color and LBP | SVM | Local dataset | 82.2% | - | [34] |
| Pouladzadeh et al., 2014 | Color, texture, size and shape | SVM | | 92.2% | - | [12] |
| Pouladzadeh et al., 2014 | Graph Cut, color, texture, size and shape | SVM | | 95.0% | - | [12] |
| Kawano and Yanai, 2013 | Color and SURF | SVM | | - | 81.6% | [27] |
| Farinella et al., 2014 | Bag of textons | SVM | PFID | 31.3% | - | [24] |
| Yang et al., 2010 | Pairwise local features | SVM | | 78.0% | - | [28] |
| He et al., 2014 | DCD, MDSIFT, SCD, SIFT | KNN | TADA | 64.5% | - | [29] |
| Zhu et al., 2015 | Color, texture and SIFT | KNN | | 70.0% | - | [10] |
| Matsuda et al., 2012 | SIFT, HOG, Gabor texture and color | MKL-SVM | UEC-Food-100 | 21.0% | 45.0% | [9] |
| Liu et al., 2016 | Extended HOG and color | Fisher Vector | | 59.6% | 82.9% | [36] |
| Kawano and Yanai, 2014 | Color and HOG | Fisher Vector | | 65.3% | - | [39] |
| Yanai and Kawano, 2015 | Color and HOG | Fisher Vector | | 65.3% | 86.7% | [35] |
| Kawano and Yanai, 2014 | Fisher Vector, HOG and color | One-vs-rest linear classifier | UEC-Food-256 | 50.1% | 74.4% | [38] |
| Yanai et al., 2015 | Color and HOG | Fisher Vector | | 52.9% | 75.5% | [35] |
Deep Learning Methods

| Study | Approach | Database | Top-1 Acc.** | Top-5 Acc.** | Ref. |
|---|---|---|---|---|---|
| Anthimopoulos et al., 2014 | ANN | Diabetes | 75.0% | - | [31] |
| Bossard et al., 2014 | Food-101 | Food-101 | 56.4% | - | [18] |
| Yanai and Kawano, 2015 | DCNN-Food | | 70.4% | - | [35] |
| Liu et al., 2016 | DeepFood | | 77.4% | 93.7% | [36] |
| Meyers, 2015 | GoogLeNet | | 79.0% | - | [11] |
| Hassannejad et al., 2016 | Inception v3 | | 88.3% | 96.9% | [32] |
| Meyers, 2015 | GoogLeNet | Food201-segmented | 76.0% | - | [11] |
| Meyers, 2015 | GoogLeNet | Menu-Match | 81.4%* | - | [11] |
| Christodoulidis et al., 2015 | Patch-wise CNN | Own database | 84.9% | - | [34] |
| Pouladzadeh et al., 2016 | Graph Cut + Deep Neural Network | | 99.0% | - | [40] |
| Kawano and Yanai, 2014 | OverFeat + Fisher Vector | UEC-Food-100 | 72.3% | 92.0% | [39] |
| Liu et al., 2016 | DeepFood | | 76.3% | 94.6% | [36] |
| Yanai and Kawano, 2015 | DCNN-Food | | 78.8% | 95.2% | [35] |
| Hassannejad et al., 2016 | Inception v3 | | 81.5% | 97.3% | [32] |
| Chen and Ngo, 2016 | Arch-D | | 82.1% | 97.3% | [23] |
| Liu et al., 2016 | DeepFood | UEC-Food-256 | 54.7% | 81.5% | [36] |
| Yanai and Kawano, 2015 | DCNN-Food | | 67.6% | 89.0% | [35] |
| Hassannejad et al., 2016 | Inception v3 | | 76.2% | 92.6% | [32] |
| Ciocca et al., 2016 | VGG | UNIMIB2016 | 78.3% | - | [8] |
| Chen and Ngo, 2016 | Arch-D | VIREO | 82.1% | 95.9% | [23] |
\* Represents the mean average precision (mAP) rather than classification accuracy.

\*\* Top-1 and/or Top-5 indicate that the performance of the classification model was evaluated based on the single class assigned the highest probability, and/or on the five highest-probability classes among the predictions for each given food item, respectively.