Digital Health. 2016 Jul 12;2:2055207616657212. doi: 10.1177/2055207616657212

Crowdsourcing for self-monitoring: Using the Traffic Light Diet and crowdsourcing to provide dietary feedback

Gabrielle M Turner-McGrievy,1 Sara Wilcox,2,3 Andrew T Kaczynski,1,2 Donna Spruijt-Metz,4,5 Brent E Hutto,2 Eric R Muth,6 Adam Hoover7
PMCID: PMC6001271  PMID: 29942561

Abstract

Background

Smartphone photography and crowdsourcing feedback could reduce participant burden for dietary self-monitoring.

Objectives

To assess if untrained individuals can accurately crowdsource diet quality ratings of food photos using the Traffic Light Diet (TLD) approach.

Methods

Participants were recruited via Amazon Mechanical Turk and read a one-page description of the TLD. The study examined the participant accuracy score (the number of foods each person correctly categorized as red, yellow, or green), the food accuracy score (how accurately each food was categorized across participants), and whether the accuracy of ratings increased as more users were included in the crowdsourcing. For each of a range of possible crowd sizes (n = 15, n = 30, etc.), 10,000 bootstrap samples were drawn and a 95% confidence interval (CI) for accuracy constructed using the 2.5th and 97.5th percentiles.

Results

Participants (n = 75; body mass index 28.0 ± 7.5 kg/m2; age 36 ± 11 years; 59% attempting weight loss) rated 10 foods as red, yellow, or green. Across all foods, raters demonstrated high red/yellow/green accuracy (>75%). The mean accuracy score per participant was 77.6 ± 14.0%. Individual photos were rated accurately the majority of the time (range = 50%–100%). There was little variation in the 95% CI across the five crowd sizes, indicating that large numbers of individuals may not be needed to crowdsource foods accurately.

Conclusions

Nutrition-novice users can be trained easily to rate foods using the TLD. Since feedback from crowdsourcing relies on the agreement of the majority, this method holds promise as a low-burden approach to providing diet-quality feedback.

Keywords: Diet, crowdsourcing, self-monitoring, mobile health, mobile applications, food, smartphone, weight loss, photography

Introduction

Dietary self-monitoring is one of the key components of behavioral weight loss programs.1,2 Adherence to self-monitoring3 and receiving personalized feedback on self-monitoring behaviors4,5 are both associated with improved weight loss. Diet apps have held promise as a way to increase self-monitoring frequency, but usage tends to decline over time.6–8 Smartphone cameras make just-in-time food recording possible,9 and researchers have been developing ways to conduct dietary assessment of foods in photos.10 There has also been increasing interest in finding computerized methods to make dietary assessment easier.11 However, dietary self-monitoring differs from dietary assessment12 in that dietary assessment is infrequent and must be highly accurate, whereas self-monitoring must occur every time something is consumed; the more proximal the feedback is to the desired behavior, the more likely that behavior is to be sustained.13 High degrees of accuracy, however, are not always crucial for dietary self-monitoring, because self-monitoring is not used for research data collection and usually involves tracking only a single, general factor of interest, such as an estimate of energy intake, rather than a very detailed level of dietary data (e.g. mg of calcium, µg of selenium).

One option for providing dietary self-monitoring feedback is crowdsourcing, which uses the input of several users to provide feedback. Crowdsourcing can take on many roles, including collectively raising money (crowdfunding), completing tasks (crowd labor), conducting research (crowd research), and generating new products and ideas (creative crowdsourcing).14 Crowdsourcing dietary information would be a hybrid of crowd labor and crowd research, allowing users to give quick collective feedback on foods and beverages consumed, thus providing users with an overall rating of their diets. This approach also has the potential to reduce the burden of self-monitoring and add elements of gamification,14 which could make self-monitoring more engaging and rewarding for users.

Previous research has examined the use of meal photos and crowdsourcing for dietary self-monitoring.15 The Eatery app, which is no longer available to consumers, allowed users to photograph their foods, rate their meals on a sliding scale from fit (healthy) to fat (unhealthy), and rate the photographs of foods and beverages posted by other users. In turn, users received peer feedback as an average healthiness score for their own foods and beverages. This study15 assessed how closely the crowdsourced peer ratings of foods and beverages in 450 pictures from the Eatery mobile app (n = 5006 peers; mean 18.4 peer ratings/photo), made using the simple “healthiness” scale, matched ratings of the same pictures by trained observers. The average of the three trained raters’ scores was highly correlated with the peer healthiness score across all photos (r = 0.88, P < .001). These findings suggest that crowdsourcing holds potential as a low-burden approach for providing basic feedback on overall diet quality.

The present study examined the use of the Traffic Light Diet (TLD)16,17 as a diet rating method for crowdsourcing. The goal of the TLD approach is to “provide the most nutrition with the least number of calories,”18 categorizing foods as red (eat very rarely; low nutrient density, high calorie), yellow (eat in moderation), and green (low in calories, high nutrient density). The TLD has mainly been used to assist children with dietary self-monitoring, encouraging the intake of low-energy-dense foods and promoting weight loss.16,19 The TLD approach has also been widely used to assist adults with making healthier point-of-purchase food decisions, such as in cafeterias,20,21 at concession stands,22 and on food labels.23 More recently, there has been interest in using the TLD approach for self-monitoring with adults, as the TLD can be used with low-literacy populations.24 Previous research has also demonstrated that rating foods with a traffic light system has the potential to promote long-term changes in dietary intake21 and can provide a salient nutrition label that triggers processes within the brain, as detected by functional magnetic resonance imaging (fMRI), that are used by adults who are successful at making healthy diet choices.25
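Because the feedback a user would receive depends on aggregating many raters’ colors for each photo, a minimal sketch of majority-vote aggregation is shown below. The function, data, and tie handling are illustrative assumptions; the paper does not describe an aggregation implementation.

```python
from collections import Counter

def majority_tld_color(ratings):
    """Collapse crowd ratings for one food photo into a single TLD color.

    Returns the most frequent color; ties resolve to whichever color
    Counter encounters first (a simplification for this sketch).
    """
    color, _count = Counter(ratings).most_common(1)[0]
    return color

# Illustrative crowd of 10 raters judging one photo of brown rice.
crowd = ["yellow"] * 7 + ["green"] * 2 + ["red"]
print(majority_tld_color(crowd))  # -> yellow
```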

The present study had five main objectives: to examine 1) whether users could accurately crowdsource photos of foods as red, yellow, or green after receiving a brief training on the TLD; 2) whether the accuracy of ratings differed among foods categorized as red, yellow, or green; 3) whether the accuracy of the crowdsourced food categories increased as more participants were added to the crowd; 4) which demographic characteristics, technology use, and/or nutrition knowledge factors were associated with correctly categorizing foods; and 5) how users perceived the difficulty of various dietary self-monitoring methods.

Methods

Participants were recruited via Amazon Mechanical Turk (MTurk; www.mturk.com) to complete a survey (www.surveygizmo.com). MTurk is an online system that allows requesters to submit Human Intelligence Tasks (HITs) for online workers to complete in return for monetary compensation.26 The demographic characteristics of MTurk workers tend to be more diverse than those of average internet survey populations.26 For the present study, the sample of eligible participants was limited to US citizens over the age of 18 years who were MTurk Masters, a group of workers who have demonstrated consistent reliability in completing HITs as determined by MTurk. Participants were paid US$0.50 for completing the survey, which is similar to or higher than compensation rates used in previous MTurk studies.26–28 The study was approved by a university Institutional Review Board, and participants provided informed consent prior to beginning the survey.

After accepting the HIT, participants were directed to a survey that assessed demographic information, technology ownership (tablet or smartphone), use of a diet app or physical activity app or device, and prior training in and knowledge of nutrition. The survey also presented the user with a one-page description of how to rate foods using the TLD. Ten foods representing all major food groups were selected and placed in random order within the survey. Participants then categorized the foods as red (potato chips, white-flour bagel, ham luncheon meat), yellow (whole-grain spaghetti with marinara sauce, fat-free plain yogurt, brown rice, black beans), or green (apple, salad, carrots). Additionally, to examine the perceived ease of use of the crowdsourcing approach in the context of other potential dietary self-monitoring methods, participants were asked to rate, on a 9-point Likert scale, how easy (1) or difficult (9) each of the following would be for dietary self-monitoring: using a photo-taking/crowdsourcing method, using a mobile diet app, using a book to find calorie values of foods/beverages and calculate energy intake, or wearing a Bite Counter29 to provide an automated estimate of caloric intake. The Bite Counter is worn like a watch and tracks wrist motion in three planes using a microelectromechanical systems gyroscope.30 When a pattern of wrist roll motion is detected, a counter tracks the number of bites taken at each eating or drinking event, capturing bite frequency but not bite size. Calculations based on the Mifflin-St Jeor formula for resting metabolic rate31 have previously been used to estimate an individual’s kilocalories per bite (KPB) from demographic variables. These equations have been tested and refined both against dietary data from 24-hour recalls as a gold standard32 and against observations of 273 individuals eating a meal in a cafeteria,33 and were found to estimate the calories consumed in an individual meal to within ±50 kcal. Each dietary self-monitoring method included a detailed description to give participants a clear overview of what it would entail.
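The Mifflin-St Jeor equation itself is standard,31 but this paper does not give the formula used to convert resting metabolic rate into kilocalories per bite, so the scaling constant in the sketch below (KPB_SCALE) is a labeled placeholder, not the Bite Counter’s actual calibration.

```python
def mifflin_st_jeor_rmr(weight_kg, height_cm, age_yr, male):
    """Resting metabolic rate in kcal/day (Mifflin-St Jeor, 1990)."""
    rmr = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_yr
    return rmr + (5.0 if male else -161.0)

# Placeholder: kcal per bite per kcal/day of RMR. The real KPB
# calibration is NOT given in this paper; this value is assumed.
KPB_SCALE = 0.005

def estimate_meal_kcal(bites, weight_kg, height_cm, age_yr, male):
    """Illustrative bite-count calorie estimate: bites x person-specific KPB."""
    kpb = KPB_SCALE * mifflin_st_jeor_rmr(weight_kg, height_cm, age_yr, male)
    return bites * kpb

# Example: 60 bites by a 75 kg, 170 cm, 36-year-old male.
print(round(estimate_meal_kcal(60, 75.0, 170.0, 36, True)))  # ~491 kcal
```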

Two different accuracy scores were calculated. Each participant received an accuracy score (participant accuracy score), reflecting the number of the 10 viewed foods that the participant correctly categorized as red, yellow, or green (possible range 0%–100%). Each food also received an accuracy score (food accuracy score), calculated as the proportion of the 75 participant ratings in which that food was correctly categorized (possible range 0%–100%).
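As a concrete illustration of the two scores, the sketch below computes both from a small ratings table; the data structure and three-food example are assumptions for illustration, not the study’s records.

```python
TRUE_COLOR = {"apple": "green", "potato chips": "red", "brown rice": "yellow"}

# participant id -> {food: color the participant chose}
RATINGS = {
    1: {"apple": "green", "potato chips": "red",    "brown rice": "green"},
    2: {"apple": "green", "potato chips": "yellow", "brown rice": "yellow"},
    3: {"apple": "green", "potato chips": "red",    "brown rice": "yellow"},
}

def participant_accuracy(pid):
    """Percent of foods this participant categorized correctly."""
    chosen = RATINGS[pid]
    correct = sum(chosen[f] == TRUE_COLOR[f] for f in TRUE_COLOR)
    return 100.0 * correct / len(TRUE_COLOR)

def food_accuracy(food):
    """Percent of participants who categorized this food correctly."""
    correct = sum(r[food] == TRUE_COLOR[food] for r in RATINGS.values())
    return 100.0 * correct / len(RATINGS)

print(participant_accuracy(1))      # ~66.7: 2 of 3 foods correct
print(food_accuracy("brown rice"))  # ~66.7: 2 of 3 participants correct
```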

To date, no research has been conducted on the number of participants needed to crowdsource dietary information accurately. It is not known whether only a few users (e.g. 15) are needed to reach a majority agreement on dietary feedback or whether many more (e.g. 45) are required. Therefore, this study also examined whether mean participant accuracy scores increased as more participants were included in the crowdsourcing of the food ratings. To achieve this, a random list of numbers from 1 to 75 was generated and assigned to participants, who were then sorted in this random order. Five nested groups of participants, with their corresponding participant accuracy scores, were created: 1) the first 15 randomly ordered participants (n = 15); 2) Group 1 plus the next 15 (n = 30); 3) Group 2 plus the next 15 (n = 45); and so on. This procedure simulated how accuracy could change as more participants are added to the crowdsourcing of foods (see the sketch below).
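A short sketch of this nested-group construction, using stand-in scores rather than the study’s data:

```python
import random

rng = random.Random(2016)
# Stand-in participant accuracy scores; the study's real scores ranged 50-100%.
scores = [rng.uniform(50, 100) for _ in range(75)]

rng.shuffle(scores)  # assign the random 1-to-75 ordering
# Each group contains the previous one plus the next 15 participants.
groups = {n: scores[:n] for n in (15, 30, 45, 60, 75)}

for n, group in groups.items():
    print(f"n={n:2d}  mean accuracy={sum(group) / n:5.1f}%")
```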

Statistical analysis

Descriptive statistics were used to characterize the sample. Means (± standard deviations (SDs)) were calculated for food accuracy scores, participant accuracy scores, and scores reflecting methods and features that would motivate users to consistently engage in dietary self-monitoring. For each of a range of possible crowd sizes (the five groups described in the previous paragraph), 10,000 bootstrap samples were drawn, and a 95% confidence interval (CI) for accuracy was constructed using the 2.5th and 97.5th percentiles. The CI width reflects the uncertainty in whether a given crowd size will produce results similar to those of the full study sample. General linear models were used to test whether demographic characteristics (model 1) or technology use and nutrition knowledge (model 2) were associated with participant accuracy score. Frequency distributions were calculated to examine which methods and features participants endorsed as motivating them most to self-monitor regularly. Analyses were conducted using SAS V9.4 software, with a P value of .05 indicating statistically significant differences.
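A minimal re-implementation of this percentile bootstrap in Python (the study used SAS); the stand-in scores are illustrative:

```python
import random
import statistics

def bootstrap_ci_95(scores, n_boot=10_000, seed=42):
    """95% CI for the mean: 2.5th/97.5th percentiles of bootstrap means."""
    rng = random.Random(seed)
    means = sorted(
        statistics.mean(rng.choices(scores, k=len(scores)))  # resample with replacement
        for _ in range(n_boot)
    )
    return means[int(0.025 * n_boot)], means[int(0.975 * n_boot)]

rng = random.Random(0)
crowd_of_15 = [rng.uniform(50, 100) for _ in range(15)]  # stand-in accuracy scores
low, high = bootstrap_ci_95(crowd_of_15)
print(f"95% CI: ({low:.1f}, {high:.1f})")
```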

Results

A total of 75 participants completed the survey. Participants were mostly overweight (mean body mass index (BMI) 28.0 ± 7.5 kg/m2), without a college degree (58%), white (85%), female (55%), and currently attempting weight loss (59%) (Table 1). The mean accuracy score per participant (percentage of the 10 food pictures correctly identified) was 77.6 ± 14.0%, with a range of 50%–100%. Each of the 10 foods was correctly categorized as red, yellow, or green by more than 50% of participants (Table 2). Green foods received the highest food accuracy scores (mean, range) (99.7%, 99%–100%), followed by yellow (68.8%, 63%–76%) and red (68%, 52%–100%).

Table 1.

Demographic characteristics of Amazon Mechanical Turk participants completing crowdsourcing data collection survey.

Characteristics Mechanical Turk Survey participants (n = 75)
Mean age (years) (±SD) 36.0 ± 10.6
Sex
 Female 41 (55%)
 Male 34 (45%)
Hispanic
 Yes 2 (3%)
 No 73 (97%)
Race
 Black 6 (8%)
 White 64 (85%)
 Other 5 (7%)
Education
 Some high school 2 (3%)
 High school 9 (12%)
 Some college 32 (43%)
 College graduate 26 (34%)
 Advanced degree 6 (8%)
Mean BMI (kg/m2) (±SD) 28.0 ± 7.5
Current weight loss status
 Not trying to lose weight 31 (41%)
 Trying to lose weight 44 (59%)
Has attempted weight loss in the past
 Yes 67 (89%)
 No 8 (11%)
Owns a smartphone or tablet
 Yes 68 (91%)
 No 7 (9%)
Currently uses a wearable tracker to self-monitor exercise or sleep
 Yes 8 (11%)
 No 67 (89%)
Currently uses an app to self-monitor diet
 Yes 22 (29%)
 No 53 (71%)
Has taken a college-level nutrition course
 Yes 13 (17%)
 No 62 (83%)

SD: standard deviation; BMI: body mass index.

Table 2.

Food accuracy score ratings for each food item rated by participants (n = 75).

Food (n = 75 participants rating each food)  Rated green  Rated yellow  Rated red  Classified correctly (%)
Green foods
 Apple  74*  1  0  99
 Green salad  75*  0  0  100
 Carrots, raw  75*  0  0  100
Yellow foods
 Whole-grain spaghetti with marinara sauce  0  55*  20  73
 Brown rice  14  57*  4  76
 Black beans  25  47*  3  63
 Plain, low-fat yogurt  27  47*  1  63
Red foods
 Potato chips  0  1  74*  100
 Ham luncheon meat  5  31  39*  52
 Bagel, plain  0  36  39*  52
* Indicates correct categorization of Traffic Light Diet color.

This study also examined the extent to which including a smaller number of users in the crowdsourcing leads to greater variability in the mean ratings obtained. Large crowd sizes such as 75 produced mean ratings falling mostly in a narrow range, with 95% of means in the bootstrap analysis for n = 75 falling between 74.4 and 80.7. Across the five crowd sizes examined, CI width increased as expected with decreasing crowd size (n = 15, 95% CI 70.7, 84.7; n = 30, 95% CI 72.7, 82.7; n = 45, 95% CI 73.6, 81.6; n = 60, 95% CI 74.0, 81.0; n = 75, 95% CI 74.4, 80.7), but even a very small crowd size tended to produce ratings within a fairly limited range.

Two separate general linear models were used to test whether demographic characteristics (model 1) or technology use and nutrition knowledge (model 2) were associated with participant accuracy score. In model 1, race (P = .09), education (P = .15), sex (P = .23), and BMI (P = .91) were not related to participant accuracy score (F = 1.46, P = .18). In model 2, owning a smartphone or tablet (P = .77), using a fitness tracker (P = .08) or diet tracking app (P = .89), prior completion of a college-level nutrition course (P = .10), and self-assessment of nutrition knowledge (P = .58) were not associated with participant accuracy score (F = 1.14, P = .35).

Participants were also asked to rate, on a scale of 1 (easy) to 9 (difficult), how easy or difficult they felt it would be to use four different dietary self-monitoring methods. Participants rated using a Bite Counter29 that would automatically track calories as easiest (3.2 ± 2.2), followed by the photo crowdsourcing approach (3.5 ± 1.9), a standard diet tracking app (3.8 ± 2.0), and a calorie book (5.8 ± 2.3).

Discussion

The present study found that users without extensive formal nutrition training could be quickly trained to provide somewhat accurate dietary feedback based on the TLD,18 with the majority of ratings for each food matching the correct color. Because users frequently categorized each food correctly, users receiving feedback on their food choices through this method would likely receive accurate feedback on their diets (e.g. the number of red, yellow, or green foods). The study also found that even a very small crowd size tends to produce ratings within ranges similar to those produced by larger crowd sizes. Crowdsourcing holds promise as an inexpensive, low-burden method for addressing public health issues.34 Although crowdsourcing has been used in some areas of health, such as radiology,35 pathology,35 and dermatology,36 it has not been widely used in public health.34 Dietary self-monitoring requires daily recording of meals, and individuals often find the process burdensome, time-consuming, and tedious.37,38 Although research has shown that accuracy is less important than frequency of self-monitoring for weight loss39 and that crowdsourcing has demonstrated high accuracy in other areas of public health,35,40 little research has assessed the accuracy of dietary intake feedback provided via crowdsourcing.

One such study that examined the accuracy of crowdsourced dietary data looked at the Eatery™ diet tracking app.15 As discussed previously, the Eatery app used crowdsourcing to provide very rudimentary dietary feedback to users. Researchers found that, compared with trained raters, users of the app could provide highly accurate feedback on the foods in the photos.15 In addition, researchers developed an app (PlateMate) that used food photography and crowdsourcing to provide feedback on calories. Calorie estimates from crowdsourcing (MTurk) were comparable to those of three expert raters (registered dietitians), with the error rate averaging 172 kcal (28.7%) per photograph for the trained raters versus 198 kcal (33.2%) per photograph for crowdsourced feedback.41

The present paper has several strengths. The TLD is an evidence-based way to categorize meals to assist with weight loss,16 and this study is the first to examine the TLD for crowdsourced feedback. Study participants included roughly equal numbers of males and females. More than half of participants reported actively attempting weight loss, which is higher than rates reported for normal-weight populations but similar to those seen in overweight populations.42,43 The study also had limitations. Participants were mostly white. Only single food items, and no beverages or mixed dishes (besides spaghetti and sauce), were included in each photo. The study also did not assess engagement over time with this type of dietary self-monitoring. Others who have examined mobile app approaches,6,8 as well as crowdsourcing photo approaches,44 have found that engagement with self-monitoring tends to decline over time.

Implications for research and practice

There is a need to make dietary self-monitoring more engaging and less burdensome.45 Because feedback from crowdsourcing relies on the agreement of the majority, this method holds promise as a low-burden approach to providing diet-quality feedback while also building in gamification and social networking aspects that may make dietary self-monitoring more engaging. In addition, using food photography and crowdsourcing for dietary self-monitoring may be an appealing approach for registered dietitians to use with their patients and clients. A practitioner interface could allow nutritionists to view the foods consumed and the numbers of red, yellow, and green foods eaten each day. Future research should examine whether a more detailed training on the TLD would improve accuracy, and should also explore whether this approach can increase engagement in dietary self-monitoring over time, improve dietary quality, and assist individuals with achieving a healthy body weight.

Acknowledgements

The authors would like to thank the Amazon Mechanical Turk participants for completing this study.

Contributorship

GTM, DSM, ERM, and AWH conceived the project. Data acquisition and interpretation were conducted by GTM and ATK. GTM and BEH performed the statistical analyses and implemented the required custom software. All authors contributed to the research concept and design, wrote and critically revised the manuscript, and approved the final version.

Declaration of Conflicting Interests

Authors ERM and AWH have formed the company “Bite Technologies” to market and sell a bite counting device. Clemson University owns a US patent for intellectual property known as “The Weight Watch”, USA, Patent No. 8310368, filed January 2009, granted November 13, 2012. Bite Technologies has licensed the method from Clemson University. ERM and AWH receive royalty payments from bite counting device sales. No other authors have any conflicts to declare.

Ethical approval

The Institutional Review Board of the University of South Carolina approved this study.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the National Cancer Institute of the National Institutes of Health (award number R21CA18792901A1). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Guarantor

GTM.

Peer review

This manuscript was reviewed by Melanie Warziski Turk and Jing Wang, University of Pittsburgh.

References

1. Burke LE, Wang J, Sevick MA. Self-monitoring in weight loss: a systematic review of the literature. J Am Diet Assoc 2011; 111(1): 92–102.
2. Linde JA, Jeffery RW, French SA, et al. Self-weighing in weight gain prevention and weight loss trials. Ann Behav Med 2005; 30(3): 210–216.
3. Warziski M, Sereika S, Styn M, et al. Changes in self-efficacy and dietary adherence: the impact on weight loss in the PREFER study. J Behav Med 2008; 31(1): 81–92.
4. Turk M, Elci O, Wang J, et al. Self-monitoring as a mediator of weight loss in the SMART randomized clinical trial. Int J Behav Med 2013; 20(4): 556–561.
5. Venditti E, Kramer M. Necessary components for lifestyle modification interventions to reduce diabetes risk. Curr Diabetes Rep 2012; 12(2): 138–146.
6. Carter MC, Burley VJ, Nykjaer C, et al. Adherence to a smartphone application for weight loss compared to website and paper diary: pilot randomized controlled trial. J Med Internet Res 2013; 15(4): e32.
7. Turner-McGrievy G, Tate D. Tweets, apps, and pods: results of the 6-month Mobile Pounds Off Digitally (Mobile POD) randomized weight-loss intervention among adults. J Med Internet Res 2011; 13(4): e120.
8. Turner-McGrievy GM, Beets MW, Moore JB, et al. Comparison of traditional versus mobile app self-monitoring of physical activity and dietary intake among overweight adults participating in an mHealth weight loss program. J Am Med Inform Assoc 2013; 20(3): 513–518.
9. Stumbo PJ. New technology in dietary assessment: a review of digital methods in improving food record accuracy. Proc Nutr Soc 2013; 72(1): 70–76.
10. Martin CK, Nicklas T, Gunturk B, et al. Measuring food intake with digital photography. J Hum Nutr Diet 2014; 27: 72–81.
11. Arens-Volland AG, Spassova L, Bohn T. Promising approaches of computer-supported dietary assessment and management: current research status and available applications. Int J Med Inform 2015; 84(12): 997–1008.
12. Lieffers JR, Hanning RM. Dietary assessment and self-monitoring with nutrition applications for mobile devices. Can J Diet Pract Res 2012; 73(3): e253–e260.
13. Burke LE, Styn MA, Sereika SM, et al. Using mHealth technology to enhance self-monitoring for weight loss: a randomized trial. Am J Prev Med 2012; 43(1): 20–26.
14. Parvanta C, Roth Y, Keller H. Crowdsourcing 101: a few basics to make you the leader of the pack. Health Promot Pract 2013; 14(2): 163–167.
15. Turner-McGrievy G, Helander E, Kaipainen K, et al. The use of crowdsourcing for dietary self-monitoring: crowdsourced ratings of food pictures are comparable to ratings by trained observers. J Am Med Inform Assoc 2015; 22(e1): e112–e119.
16. Epstein LH, Valoski A, Wing RR, et al. Ten-year follow-up of behavioral, family-based treatment for obese children. JAMA 1990; 264(19): 2519–2523.
17. Epstein LH, Valoski A, Wing RR, et al. Ten-year outcomes of behavioral family-based treatment for childhood obesity. Health Psychol 1994; 13(5): 373–383.
18. Academy of Nutrition and Dietetics. Evidence Analysis Library. Traffic Light Diet and similar approaches, http://www.andeal.org/topic.cfm?cat=1429&highlight=Traffic%20Light%20Diet%20and%20similar%20approaches&home=1 (2006, accessed 10 June 2016).
19. Williamson DA, Walden HM, White MA, et al. Two-year internet-based randomized controlled trial for weight loss in African-American girls. Obesity 2006; 14(7): 1231–1243.
20. LaCaille LJ, Schultz JF, Goei R, et al. Go!: results from a quasi-experimental obesity prevention trial with hospital employees. BMC Public Health 2016; 16(1): 1–16.
21. Thorndike AN, Riis J, Sonnenberg LM, et al. Traffic-light labels and choice architecture. Am J Prev Med 2014; 46(2): 143–149.
22. Olstad DL, Vermeer J, McCargar LJ, et al. Using traffic light labels to improve food selection in recreation and sport facility eating environments. Appetite 2015; 91: 329–335.
23. Roberto CA, Bragg MA, Schwartz MB, et al. Facts up front versus traffic light food labels: a randomized controlled trial. Am J Prev Med 2012; 43(2): 134–141.
24. Rosal MC, White MJ, Restrepo A, et al. Design and methods for a randomized clinical trial of a diabetes self-management intervention for low-income Latinos: Latinos en Control. BMC Med Res Methodol 2009; 9(1): 1–11.
25. Enax L, Hu Y, Trautner P, et al. Nutrition labels influence value computation of food products in the ventromedial prefrontal cortex. Obesity 2015; 23(4): 786–792.
26. Buhrmester M, Kwang T, Gosling SD. Amazon’s Mechanical Turk: a new source of inexpensive, yet high-quality, data? Perspect Psychol Sci 2011; 6(1): 3–5.
27. Kaczynski AT, Wilhelm Stanis SA, Hipp JA. Point-of-decision prompts for increasing park-based physical activity: a crowdsource analysis. Prev Med 2014; 69: 87–89.
28. Mason W, Watts DJ. Financial incentives and the “performance of crowds”. In: HCOMP ’09 Proceedings of the ACM SIGKDD workshop on human computation, Paris, France, 2009, pp. 77–85.
29. Scisco JL, Muth ER, Dong Y, et al. Slowing bite-rate reduces energy intake: an application of the bite counter device. J Am Diet Assoc 2011; 111(8): 1231–1235.
30. Dong Y, Hoover A, Scisco J, et al. A new method for measuring meal intake in humans via automated wrist motion tracking. Appl Psychophysiol Biofeedback 2012; 37(3): 205–215.
31. Mifflin MD, St Jeor ST, Hill LA, et al. A new predictive equation for resting energy expenditure in healthy individuals. Am J Clin Nutr 1990; 51(2): 241–247.
32. Scisco J, Muth E, Hoover A. Examining the utility of a bite-count based measure of eating activity in free-living humans. J Acad Nutr Diet 2013; 114(3): 464–469.
33. Salley J. Accuracy of a bite-count based calorie estimate compared to human estimates with and without calorie information available. Paper no. 1680, MS Thesis, Clemson University, USA, 2013, http://tigerprints.clemson.edu/all_theses/1680 (accessed 19 June 2016).
34. Brabham DC, Ribisl KM, Kirchner TR, et al. Crowdsourcing applications for public health. Am J Prev Med 2014; 46(2): 179–187.
35. Ranard B, Ha Y, Meisel Z, et al. Crowdsourcing: harnessing the masses to advance health and medicine, a systematic review. J Gen Intern Med 2014; 29(1): 187–203.
36. Armstrong A, Cheeney S, Wu J, et al. Harnessing the power of crowds. Am J Clin Dermatol 2012; 13(6): 405–416.
37. Burke LE, Warziski M, Starrett T, et al. Self-monitoring dietary intake: current and future practices. J Ren Nutr 2005; 15(3): 281–290.
38. Burke LE, Conroy MB, Sereika SM, et al. The effect of electronic self-monitoring on weight loss and dietary intake: a randomized behavioral weight loss trial. Obesity (Silver Spring) 2011; 19(2): 338–344.
39. Yon BA, Johnson RK, Harvey-Berino J, et al. The use of a personal digital assistant for dietary self-monitoring does not improve the validity of self-reports of energy intake. J Am Diet Assoc 2006; 106(8): 1256–1259.
40. Ilakkuvan V, Tacelosky M, Ivey KC, et al. Cameras for public health surveillance: a methods protocol for crowdsourced annotation of point-of-sale photographs. JMIR Res Protoc 2014; 3(2): e22.
41. Noronha J, Hysen E, Zhang H, et al. PlateMate: crowdsourcing nutritional analysis from food photographs. In: Proceedings of the 24th annual ACM symposium on user interface software and technology, Santa Barbara, CA, 2011, pp. 1–12.
42. Kruger J, Galuska DA, Serdula MK, et al. Attempting to lose weight: specific practices among U.S. adults. Am J Prev Med 2004; 26(5): 402–406.
43. Yoong SL, Carey ML, Sanson-Fisher RW, et al. A cross-sectional study assessing the self-reported weight loss strategies used by adult Australian general practice patients. BMC Fam Pract 2012; 13: 48.
44. Helander E, Kaipainen K, Korhonen I, et al. Factors related to sustained use of a free mobile app for dietary self-monitoring with photography and peer feedback: retrospective cohort study. J Med Internet Res 2014; 16(4): e109.
45. Burke LE, Swigart V, Warziski Turk M, et al. Experiences of self-monitoring: successes and struggles during treatment for weight loss. Qual Health Res 2009; 19(6): 815–828.
