2024 Nov 15;26:e51432. doi: 10.2196/51432

Table 2.

A comprehensive overview of notable publications for 4 volume estimation approaches, arranged chronologically.

Approach and study, year Projects or team Reference object Item Reported error
Pixel density approach

Martin et al [13], 2009a
  • Physical card

  • N/Ab

  • N/A


Jia et al [63], 2012
  • University of Pittsburgh

  • Circular plate

  • Circular LED light

  • <27.60

  • <54.10


Pouladzadeh et al [32], 2014
  • User’s thumb

  • 5

  • <10


Okamoto and Yanai [64], 2016
  • UECc

  • Wallet

  • 3

  • Mean calorie error

    • Beef rice bowl –242 (SD 55.1)

    • Croquette –47.08 (SD 52.5)

    • Salad 4.86 (SD 11.9)


Akpa et al [65], 2017
  • Chopstick

  • 15

  • <6.65


Liang and Li [66], 2017
  • 1-yuan coin

  • 19 fruits

  • 15 items <20%


Yanai et al [67], 2019 and Ege et al [67], 2019
  • UEC

  • Rice grain size

  • 3

  • <10%
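As context for the pixel-density rows above: these methods calibrate image scale from a reference object of known physical size (card, coin, plate, chopstick) and convert food pixel counts to real-world area. A minimal illustrative sketch, with hypothetical function names and numbers not taken from any cited study:

```python
# Illustrative sketch of reference-object scaling (hypothetical numbers,
# not from any cited study). A reference object of known physical size
# fixes the image scale; food pixel counts then convert to area.

def scale_mm_per_px(ref_size_mm: float, ref_size_px: float) -> float:
    """Millimetres per pixel, from a reference object of known size."""
    return ref_size_mm / ref_size_px

def food_area_cm2(food_pixels: int, mm_per_px: float) -> float:
    """Convert a food-region pixel count to real-world area in cm^2."""
    area_mm2 = food_pixels * mm_per_px ** 2
    return area_mm2 / 100.0  # 100 mm^2 = 1 cm^2

# A 25 mm coin spanning 100 px gives 0.25 mm/px; a 40,000 px food
# region then covers 25 cm^2.
print(food_area_cm2(40_000, scale_mm_per_px(25.0, 100.0)))  # 25.0
```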

Geometric modeling approach

Zhu et al [24], 2010 and Zhu et al [25], 2008
  • TADAd

  • Checkerboard

  • 7

  • Spherical 5.65%

  • Prismatic 28.85%


Chae et al [69], 2011
  • TADA

  • Checkerboard

  • 26

  • Cylinders 11.1%

  • Flattop solid 11.7%


Chen et al [70], 2013
  • University of Pittsburgh

  • Circular plate

  • 17

  • 3.69%


Jia et al [71], 2014
  • University of Pittsburgh

  • Circular plate

  • Other container

  • 100

  • <30% from 85/100 of test items


Tanno et al [72], 2018
  • UEC

  • Apple ARKit

  • 3

  • Mean calorie error

  • Beef rice bowl –67.14 (SD 18.8)

  • Croquette –127.0 (SD 9.0)

  • Salad –0.95 (SD 0.16)


Yang et al [73], 2019
  • University of Pittsburgh

  • Augmented reality

  • 15

  • Large objects 16.65%

  • Small objects 47.60%


Smith et al [74], 2022
  • Checkerboard

  • 26

  • Single food items 32.4%-56.1%

  • Multiple food items 23.7%-32.6%
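The geometric-modeling rows above fit simple solids (spheres, cylinders, prisms, flattop solids) to the segmented food and report the solid's volume. A minimal sketch of the cylinder case, illustrative only and not any cited paper's implementation:

```python
import math

# Illustrative sketch (not any cited paper's implementation): geometric
# modeling fits a simple solid to the segmented food region, e.g. a
# cylinder from a top-view radius and a side-view height.

def cylinder_volume_ml(radius_cm: float, height_cm: float) -> float:
    """Volume of a cylindrical food model; 1 cm^3 = 1 mL."""
    return math.pi * radius_cm ** 2 * height_cm

# A bowl-shaped item modelled as a 4 cm radius, 3 cm tall cylinder:
print(round(cylinder_volume_ml(4.0, 3.0), 1))  # 150.8
```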

3D reconstruction approach

Puri et al [26], 2009
  • 3 images

  • Checkerboard

  • 26

  • 2%-9.5%


Kong and Tan [75], 2012
  • 3 images

  • Checkerboard

  • 7

  • Volume estimation error 20%


Rahman et al [76], 2012
  • TADA

  • 2 images

  • Checkerboard

  • 6

  • 7.70%


Chang et al [77], 2013
  • TADA

  • Using food silhouettes to reconstruct a 3D object

  • 4

  • 10%


Anthimopoulos et al [78], 2015
  • GoCARB

  • 2 images

  • Physical card

  • N/A

  • Volume estimation error 9.4%


Dehais et al [79], 2017
  • GoCARB

  • 2 images

  • Physical card

  • 45 dishes

  • 14 meals

  • 8.2%-9.8%


Gao et al [80], 2018
  • SLAMe-based with Rubik's cube

  • 3

  • 11.69%-19.20% for static measurement

  • 16.32%-27.9% for continuous measurement


Ando et al [81], 2019
  • UEC

  • Multiple cameras on iPhone X for depth estimation

  • 3

  • Calorie estimation error

    • Sweet and sour pork <1%

    • Fried chicken <1%

    • Croquette <15%


Lu et al [58], 2020
  • GoCARB

  • 2 images

  • Physical card and gravity information

  • 234 items from MADiMAf

  • MAREg 19% (vs 22.6% for the earlier GoCARB system on the same task [79])

Depth camera approach

Shang et al [82], 2011
  • Specific food recording device

  • No performance report


Chen et al [83], 2012
  • Depth camera

  • No performance report


Fang et al [84], 2016
  • TADA

  • Depth camera from [85]

  • 10

  • Depth method overestimated volume compared with the geometric model


Alfonsi et al [86], 2020
  • iPhone and Android devices

  • 200

  • Carbohydrate estimation error <10 g


Herzig et al [87], 2020
  • iPhone X

  • 128

  • Relative error of weight estimation 14.0%
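The depth-camera studies above estimate volume by integrating per-pixel height above the plate or table plane over each pixel's real-world footprint. A toy sketch under the assumption of a known, uniform per-pixel footprint; all names and numbers are hypothetical:

```python
# Toy sketch of the depth-camera idea (hypothetical numbers): volume is
# the per-pixel height above the plate plane integrated over each
# pixel's real-world footprint.

def volume_from_depth(height_map_mm, px_area_mm2: float) -> float:
    """Sum height (mm) x pixel footprint (mm^2); 1000 mm^3 = 1 mL."""
    total_mm3 = sum(h * px_area_mm2 for row in height_map_mm for h in row)
    return total_mm3 / 1000.0

# A 2x2 patch of heights, each pixel covering 25 mm^2:
print(volume_from_depth([[10.0, 20.0], [30.0, 40.0]], 25.0))  # 2.5
```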

aNot available.

bN/A: not applicable.

cUEC: University of Electro-Communications.

dTADA: Technology Assisted Dietary Assessment.

eSLAM: simultaneous localization and mapping.

fMADiMA: Multimedia Assisted Dietary Management.

gMARE: mean absolute relative error.
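Footnote g's metric can be made concrete: MARE is the mean of |estimate − ground truth| / ground truth over all items. A minimal sketch, an illustration rather than the code of Lu et al [58]:

```python
# Minimal sketch of MARE (footnote g), the metric reported by
# Lu et al [58]; an illustration, not their code.

def mare(estimated, actual):
    """Mean absolute relative error: mean of |est - act| / act."""
    return sum(abs(e - a) / a for e, a in zip(estimated, actual)) / len(actual)

# Two items, each off by 10% of ground truth:
print(mare([90.0, 110.0], [100.0, 100.0]))  # 0.1
```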