Table 2.
Functions of AI in self-regulation of weight management in healthy and overweight populations (n 66)
| Author, year | Self-regulation use case | AI features | AI functions | Data collection instrument | Data type | Important results |
|---|---|---|---|---|---|---|
| Alshurafa et al., 2015 | Self-monitoring (eating behaviour) | Gesture recognition | To distinguish between food types | Necklace piezoelectric sensor (vibration sensor) | Skin motion when swallowing | Successfully distinguished between liquids and solids (F-measure > 90 %), hot and cold drinks (F-measure > 90 %) and between solid food types (F-measure ∼80 %) |
| Amft et al., 2008 | Self-monitoring (eating behaviour) | Gesture and sound recognition | To recognise dietary activities using on-body sensors | On-body sensors: (1) to detect arm movements: inertial motion sensors (three-dimensional acceleration, gyroscope and magnetometers) at the wrist and upper back (integrated into a jacket); (2) to detect chewing: an ear microphone located inside the ear canal to detect bone-conducted food breakdown sounds and (3) to detect swallowing: a collar sensor containing surface electromyography (EMG) electrodes and a stethoscope microphone | Arm movements, chewing cycle sounds and swallowing | Four intake gestures from arm movements and two food groups from chewing cycle sounds were detected and identified with a recall of 80–90 % and a precision of 50–64 %. The detection of individual swallows resulted in 68 % recall and 20 % precision. Sample-accurate recognition rates were 79 % for movements, 86 % for chewing and 70 % for swallowing |
| Amft et al., 2009 | Self-monitoring (eating behaviour) | Sound recognition | To evaluate the prediction of food weight in individual bites using an ear-pad chewing sound sensor | Ear-pad chewing sound sensor | Chewing cycles and food type | Sound-based chewing recognition achieved recalls of 80 % at 60–70 % precision. Food classification of chewing sequences had an average accuracy of 94 %. Mean weight prediction error was lowest for apples (19·4 %) and highest for lettuce (31 %) |
| Arif et al., 2017 | Self-monitoring (physical activity) | Gesture recognition | To recognise physical activity type | Inertial measurement units (IMUs) placed at wrist, chest and ankles; rotation forest classifier | Physical movement | Accurately (98 %) identified seventeen physical activities ranging from ambulation to activities of daily living |
| Aswani et al., 2019 | Self-monitoring (eating behaviour; physical activity), optimise goal setting | Predictive analytics | To predict weight loss based on subject characteristics, step count, energy intake and counselling sessions | Bayesian classification | Secondary data | Predictive modelling framework was competitive in terms of prediction accuracy with linear SVM, logistic regression and decision tree models, which further justified the use of the utility-maximising framework and its ability to capture ‘irrational’ discounting in the decision-making of individuals participating in the intervention |
| Aziz et al., 2020 | Self-monitoring (physical activity and energy expenditure) | Gesture recognition | To estimate energy expenditure during sitting, standing and treadmill walking using a smartwatch | LG Urbane Android smartwatch: tri-axial accelerometer, gyroscope and magnetometer | Physical movement | The activity-based models provided 7 % better energy expenditure estimation than the traditional acceleration-based models |
| Bastian et al., 2015 | Self-monitoring (physical activity) | Gesture recognition | To discriminate between eight activity classes (lying, slouching, sitting, standing, walking, running and cycling) in laboratory conditions, and walking the streets, running, cycling and taking the bus in free-living conditions | Hip-worn triaxial accelerometer | Physical movement | The performance of the laboratory-calibrated algorithm decreased for several activities when applied to free-living data. Recalibrating the algorithm with data closer to real-life conditions improved the detection of overall sitting (sensitivity: laboratory model 24·9 %; recalibrated model 95·7 %) |
| Bi et al., 2016 | Self-monitoring (eating behaviour) | Sound recognition | To monitor and recognise food intake in daily life | High-fidelity microphone worn on the subject’s neck near the jaw | Acoustic signals | The accuracy of food-type recognition by AutoDietary was 84·9 %, and accuracies in classifying liquid and solid food intake were up to 97·6 and 99·7 %, respectively |
| Bouarfa et al., 2014 | Self-monitoring (physical activity, energy expenditure) | Gesture recognition | Energy expenditure estimation under free-living conditions | A single ear-worn activity recognition (eAR) sensor (built-in triaxial accelerometer) | Physical movement | In free-living settings, ten different types of physical activities (i.e., lying down, standing, computer work, vacuuming, going up and down stairs, slow walking, brisk walking, slow running, fast running and cycling) were predicted |
| Chung et al., 2018 | Self-monitoring (eating behaviour and physical activity) | Gesture recognition | To detect the patterns of temporalis muscle activities during food intake and other physical activities | A glasses-type device with in-built EMG or piezoelectric strain sensors attached directly onto the skin | Skin movement | The average F1 score of the classification among the featured activities was 91·4 % |
| Dijkhuis et al., 2018 | Self-monitoring (physical activity), optimise goal setting | Predictive analytics | To predict the likelihood of a user achieving a daily personal step goal | Wrist-worn activity tracker, the Fitbit Flex | Physical activity | In 80 % of the individual cases, the random forest algorithm was the best-performing algorithm |
| Dobbins et al., 2017 | Self-monitoring (physical activity) | Gesture recognition | To distinguish physical activity types | Tri-axial accelerometers and a heart-rate monitor | Physical movement | The results showed an improvement in recognition accuracy as compared with existing studies, with accuracies of up to 99 % and sensitivities of 100 % |
| Dong et al., 2014 | Self-monitoring (eating behaviour) | Gesture recognition | To automatically detect periods of eating (free-living condition) | iPhone 4 (accelerometer and gyroscope) placed inside a pouch, wrapped snugly around the forearm | Wrist motion (linear and rotational) | Results show an accuracy of 81 % for detecting eating at 1 s resolution |
| Ermes et al., 2008 | Self-monitoring (physical activity) | Gesture recognition | To recognise sports performed by the subjects | 3-D accelerometers on hip and wrist and GPS information | Physical movement | The total accuracy of activity recognition using both supervised and unsupervised data was 89 %, only 1 percentage point lower than the accuracy using supervised data alone. Accuracy decreased by 17 percentage points when only supervised data were used for training and only unsupervised data for validation |
| Everett et al., 2018 | Self-monitoring (physical activity), self-control (physical activity), optimise goal setting | Real-time analytics with personalised micro-interventions | Automatically translate various raw mobile phone data into insights about the user’s life habits. Provide personalised, contextual, just-in-time, just-in-place recommendations (tailored messages) | Phone accelerometer | Physical movement | Physical activity increased by 2·8 metabolic equivalents of task (MET)-hours per week, weight reduced by 1·6 kg (P < 0·001) (∼2 %), BMI declined by 0·6 kg/m² (P < 0·001) and waist circumference was reduced by 1·4 cm (P < 0·01) |
| Fontana et al., 2014 | Self-monitoring (eating behaviour) | Gesture recognition | To objectively monitor ingestive behaviour in free-living conditions | A jaw motion sensor, a hand gesture sensor and an accelerometer (integrated into a device and wirelessly interfaced to a smartphone) | Jaw and hand motion | The system was able to detect food intake with an average accuracy of 89·8 % |
| Forman et al., 2018 | Self-monitoring (eating behaviour), self-control (eating behaviour), optimise goal setting | Real-time analytics with personalised micro-interventions | Predict dietary lapses and deliver a targeted intervention designed to prevent the lapse from occurring | Ensemble methods (combining weighted vote of predictions from Random Forest, LogitBoost, Bagging, Random Subspace, Bayes Net) | Ecological momentary assessment (EMA) 6 times a day + ad hoc entry of lapse event | Of the twenty-one possible triggers, 29 % of intervention triggers were based on time of day, 16·7 % on low motivation and 10 % on fatigue. The remaining eighteen triggers were identified as risk factors < 10 % of the time. There was a reduction in unplanned lapses. Participants averaged a 3·13 % weight loss |
| Forman et al., 2019a | Self-monitoring (eating behaviour), self-control (eating behaviour), optimise goal setting | Real-time analytics with personalised micro-interventions | Predict dietary lapses and deliver a targeted intervention designed to prevent the lapse from occurring | Ensemble methods (e.g., combining weighted vote of predictions from Random Forest, LogitBoost, Bagging, Random Subspace, Bayes Net) | EMA 6 times a day + ad hoc entry of lapse event | Weight Watchers (WW) + OnTrack (OT) participants reported an average of 29·72 (sd = 29·11) lapses during the 10-week study period, and the frequency decreased over time. Weight losses were greater for WW + OT (M = 4·7 %, se = 0·55) than for WW (M = 2·6 %, se = 0·80) |
| Forman et al., 2019b | Self-monitoring (eating behaviour), self-control (eating behaviour), optimise goal setting | Real-time analytics with personalised micro-interventions | To optimise the provision of weight-loss coaching using reinforcement learning | Reinforcement learning algorithm | Physical activity measured in minutes of moderate-to-vigorous physical activity (MVPA) using Fitbit Flex or a similar type of Fitbit wrist-worn activity tracker | Proposed system achieved weight losses equivalent to existing human coaching programmes (non-optimised (NO) = 4·42 %, individually optimised (IO) = 4·56 %, group-optimised (GO) = 4·39 %) at roughly one-third the cost (1·73 and 1·77 coaching hours/participant for IO and GO, v. 4·38 for NO) |
| Fullerton et al., 2017 | Self-monitoring (physical activity) | Gesture and image recognition | To recognise activity and sub-category activity types through the use of multiple body-worn accelerometers in a free-living environment | Nine body-worn accelerometers for a day of free living | Physical movement | Recognition accuracy was 97·6 %. Controlled and free-living testing provided highly accurate recognition of sub-category activities (> 95·0 %). Decision tree classifiers and maximum features were shown to have the lowest computing time |
| Goldstein et al., 2018 | Self-monitoring (eating behaviour) | Predictive analytics | Predict dietary lapses | Ensemble methods (combining weighted vote of predictions from Random Forest, LogitBoost, Bagging, Random Subspace, Bayes Net) | EMA 6 times a day + ad hoc entry of lapse event | Participants responded to an average of 94·6 % of EMA prompts (range = 85·2–98·9 %), and compliance remained relatively stable throughout the study |
| Goldstein et al., 2020 | Self-monitoring (eating behaviour), self-control (eating behaviour), optimise goal setting | Real-time analytics with personalised micro-interventions | To measure dietary lapses and relevant lapse triggers and provide personalised intervention using machine learning | Decision tree | EMA 6 times a day + ad hoc entry of lapse event | Average of 4·36 lapses per week (sd = 1·46). Participants lost an average of 2·6 % of their starting weight at mid-treatment and 3·4 % at end-of-treatment |
| Hegde et al., 2017 | Self-monitoring (physical activity) | Gesture recognition | To propose an insole-based activity monitor (SmartStep) designed to be socially acceptable and comfortable | Insole-based sensor system containing a 3-D accelerometer and a gyroscope; a wrist sensor worn on the dominant hand like a wristwatch; and the ActivPAL, a commercially available positional sensor module worn on the thigh as a criterion measure during the free-living study, classifying activities into periods spent sedentary, standing and stepping | Physical movement | The overall agreement with ActivPAL was 82·5 % (compared with 97 % for the laboratory study). SmartStep scored best on perceived comfort reported at the end of the study |
| Hezarjaribi et al., 2018 | Self-monitoring (energy intake) | Speech recognition | To facilitate nutrition monitoring using speech recognition and text mining | Smartphone microphone | Speech | Speech2Health achieved an accuracy of 92·2 % in computing energy intake |
| Hossain et al., 2020 | Self-monitoring (eating behaviour) | Image recognition | To detect and count bites and chews automatically from meal videos | Video recorder | Video images | Mean accuracy relative to manual annotation was 85·4 ± 6·3 % for the number of bites and 88·9 ± 7·4 % for the number of chews |
| Hua et al., 2020 | Self-monitoring (physical activity) | Gesture recognition | To classify nine different upper extremity exercises | Triaxial IMU | Physical movement (kinematics) | Random forest models with flattened kinematic data as a feature had the greatest accuracy (98·6 %). Using the triaxial joint range of motion as the feature set resulted in decreased accuracy (91·9 %) with faster speeds |
| Huang et al., 2017 | Self-monitoring (eating behaviour) | Gesture recognition | To recognise eating behaviour and food type | Electromyography (EMG) sensors embedded in wearable glasses connected to the smartphone; on-board real-time chewing detection algorithm using decision trees | Muscle activity (EMG) | 96 % accuracy in detecting chewing and classifying five types of food |
| Jain et al., 2018 | Self-monitoring (physical activity) | Gesture recognition | To classify activities using built-in sensors of smartphones | The phone kept in the front pocket of the subject’s trousers, built-in accelerometer and gyroscope sensor | Physical movement | Average activity classification accuracy achieved using the proposed method was 97·12 % |
| Jiang et al., 2020 | Self-monitoring (energy intake) | Image recognition | To develop a deep model-based food recognition and dietary assessment system to study and analyse food items from daily meal images (e.g., captured by smartphone) | Existing datasets | Images | The system was able to recognise food items accurately with top-1 accuracy of 71·7 % and top-5 accuracy of 93·1 % |
| Juarascio et al., 2020 | Self-monitoring (eating behaviour) | Predictive analytics | To detect changes in HRV to in turn detect the risk of experiencing an emotional eating episode in an ecologically valid setting | Empatica E4 wrist-sensor (photoplethysmography: a non-invasive optical measurement that derives cardiovascular features from light absorption of the skin) and EMA six prompts per day (participants were also instructed to self-report immediately following an emotional eating episode and answer the same questions) | Heart rate variability (HRV) | Support vector machine (SVM) models using frequency-domain features achieved the highest classification accuracy (77·99 %), sensitivity (78·75 %) and specificity (75·00 %), though were less accurate at classifying episodes (accuracy 63·48 %, sensitivity 62·68 % and specificity 70·00 %) and did not meet acceptable classification accuracy |
| Kang et al., 2019 | Self-monitoring (physical activity, energy expenditure) | Gesture recognition | To predict the energy expenditure of physical activities | AirBeat system: built-in patch-type sensor module for wireless monitoring of heart rate, exercise index, ECG and a three-axial acceleration motion detector | Physical movement, HR, exercise index, ECG, humidity and temperature | RMSE of 0·1893 and R² of 0·91 for the energy expenditures of aerobic and anaerobic exercises |
| Kim et al., 2015 | Self-monitoring (physical activity) | Gesture and image recognition | To recognise sedentary behaviour | Two accelerometers (waist over the right hip and right thigh) and a wearable camera (around the neck using a lanyard) | Physical movement | ActivPAL showed the most accurate estimate of total sedentary time, with a mean absolute percentage error (MAPE) of 4·11 % and a percentage bias of –3·52 % |
| Korpusik et al., 2017 | Self-monitoring (energy intake) | Speech recognition | To automatically extract food concepts (nutrients and energy intake) from a user’s spoken meal description | Amazon Mechanical Turk (AMT), where workers each recorded ten meal descriptions | Speech | 83 % semantic tagging accuracy |
| Kyritsis et al., 2019 | Self-monitoring (eating behaviour) | Gesture recognition | To automatically detect in-meal food intake cycles using the inertial signals (acceleration and orientation velocity) from an off-the-shelf smartwatch | Off-the-shelf smartwatch (acceleration and orientation velocity) | In-meal bite detection | Achieved the highest F1 detection score (0·913 in the leave-one-subject-out experiment) as compared with existing algorithms |
| Lin et al., 2012 | Self-monitoring (physical activity, energy expenditure) | Gesture recognition | To recognise physical activities and their corresponding energy expenditure | Motion sensors and an ECG sensor | Physical movement and ECG | Recognition accuracies using decision trees in the cross-validations ranged from 95·52 to 97·70 % |
| Lin et al., 2019 | Self-monitoring (physical activity, energy expenditure) | Image recognition | To estimate energy expenditure of physical activity in gyms | Kinect for XBOX 360 sensors (depth and motion sensing) | Physical movement (Kinect skeletal data) | The measured and predicted metabolic equivalents of task exhibited a strong positive correlation |
| Liu et al., 2012 | Self-monitoring (physical activity, energy expenditure) | Gesture recognition | To recognise the physical activity | Two triaxial accelerometers (at hip and wrist), and one ventilation sensor secured to the abdomen (AB) at the level of the umbilicus | Ventilation (abdomen), motion (hip), motion (wrist) | Correctly recognised the thirteen activity types 88·1 % of the time, which is 12·3 % higher than using a hip accelerometer alone. Also, the method predicted energy expenditure with a root mean square error of 0·42 MET, 22·2 % lower than using a hip accelerometer alone |
| Liu et al., 2015 | Self-monitoring (physical activity and energy expenditure), self-control (eating, physical activity), optimise goal setting | Gesture recognition and real-time analytics with personalised micro-interventions (1 subject) | To recognise physical activity and provide health feedback | Android phone with built-in accelerometer and magnetometer | Nine basic daily physical activities: walking, jogging, ascending and descending stairs, bicycling, travelling up in an elevator, travelling down in an elevator, using an escalator and remaining stationary | Achieved an average recognition accuracy of 98·0 % with a minimised energy expenditure |
| Liu et al., 2018 | Self-monitoring (eating behaviour) | Image recognition | To recognise food items | Camera on smartphones | Food type | (1) Outperformed existing work in food recognition accuracy (top-1: 77·5 %; top-5: 95·2 %); (2) achieved a response time equivalent to the fastest existing approaches and (3) energy consumption close to the lowest of the state of the art |
| Lo et al., 2020 | Self-monitoring (eating behaviour) | Image recognition | To estimate the portion size of food items consumed | Camera | Portion size (commonly seen food categories, e.g. burger, fried rice and pizza; each category has twenty food models with different shape geometries and portion sizes) | Mean accuracy of up to 84·68 % |
| Lopez-Meyer et al., 2010 | Self-monitoring (eating behaviour) | Image recognition | To describe the detection of food intake by a support vector machine classifier trained on the time history of chews and swallows | Videotaped by a camcorder to capture subject activity | Chewing and swallowing | The highest accuracy of detecting food intake (94 %) was achieved when both chews and swallows were used as predictors |
| Mo et al., 2012 | Self-monitoring (physical activity, energy expenditure) | Gesture recognition | To estimate energy expenditure | Wireless wearable multi-sensor integrated measurement system (WIMS): two triaxial accelerometers, worn at the hip and wrist | Body motion and breathing | Under free-living conditions, WIMS correctly recognised the activity intensity level 86 % of the time |
| Montoye et al., 2016 | Self-monitoring (physical activity) | Gesture recognition | To recognise physical activity type | Oxycon Mobile portable metabolic analyser and four accelerometer-based activity monitors | Physical movement | Overall classification accuracy for assessing activity type was 66–81 % for accelerometers mounted on the hip, wrists and thigh, which improved to 73–87 % when combining similar activities into categories. The wrist-mounted accelerometers achieved the highest accuracy for individual activities (80·9–81·1 %) and activity categories (86·6–86·7 %); accuracy was not different between wrists. The hip-mounted accelerometer had the lowest accuracy (66·2 % individual activities, 72·5 % activity categories) |
| Päßler et al., 2014 | Self-monitoring (eating behaviour) | Sound recognition | To recognise chewing sounds | Microphones applied to the outer ear canal | Chewing sound | Precision and recall over 80 % were achieved by most of the algorithms |
| Parkka et al., 2010 | Self-monitoring (physical activity) | Gesture recognition | To automatically recognise the physical activity | Nokia wireless motion bands (3-D accelerometer) | Physical movement | Overall accuracy was 86·6 %, rising to 94·0 % after classifier personalisation |
| Pouladzadeh et al., 2014 | Self-monitoring (energy intake) | Image recognition | To estimate food energy and nutrition | The built-in camera of smartphones or tablets | Food size, shape, colour and texture. Food portion was estimated based on the area | Accuracies in detecting single, non-mixed and mixed foods were 92·21, 85 and 35–65 %, respectively |
| Pouladzadeh et al., 2015 | Self-monitoring (eating behaviour) | Image recognition | To estimate food energy and nutrition using a cloud-based support vector machine (SVM) method | Built-in camera of smartphones or tablets | Food size, shape, colour and texture. Food portion was estimated based on the area | By using a cloud computing system in the classification phase and updating the database periodically, the accuracy of the recognition step increased for single, non-mixed and mixed plates of food compared with LIBSVM |
| Rabbi et al., 2015 | Self-monitoring (physical activity), self-control (physical activity), optimise goal setting | Real-time analytics with personalised micro-interventions | To automatically (1) track physical activity, (2) analyse activity and food logs to identify frequent and infrequent behaviours and (3) generate personalised suggestions that ask users to either continue, avoid or make small changes | Accelerometer and GPS, smartphone food logging | Four most common daily physical activities: walking, running, stationary (sitting or standing) and driving | Physical activity increased by 2·8 metabolic equivalents of task (MET)-hours per week (sd 6·8; P = 0·02) |
| Rachakonda et al., 2020 | Self-monitoring (energy intake) | Image recognition | To automatically detect, classify and quantify food objects on the user’s plate | A camera attached to glasses | Food type, amount, time of eating | The iLog model produced an overall accuracy of 98 % with an average precision of 85·8 % |
| Sazonov et al., 2010 | Self-monitoring (eating behaviour) | Sound recognition | To detect acoustical swallowing | Throat microphone located over laryngopharynx | Swallowing sounds | Average weighted epoch recognition accuracy for intra-visit individual models was 96·8 % which resulted in 84·7 % average weighted accuracy in detection of swallowing events |
| Sazonov et al., 2012 | Self-monitoring (eating behaviour) | Gesture recognition | To detect periods of food intake based on chewing | Piezoelectric strain gauge sensor | Jaw movement | Classification accuracy of 80·98 % and a fine time resolution of 30 s |
| Sazonov et al., 2016 | Self-monitoring (physical activity, energy expenditure) | Gesture recognition | To describe the use of a shoe-based wearable sensor system (SmartShoe) with a mobile phone for real-time recognition of various postures/physical activities and the resulting energy expenditure | Five force-sensitive resistors (integrated into a flexible insole) and an accelerometer | Physical movement | Results showed a classification accuracy virtually identical to SVM (∼95 %) while reducing the running time and the memory requirements by a factor of > 10³ |
| Spanakis et al., 2017a | Self-monitoring (eating; emotions) | Predictive analytics | Analyse the individual states of a person (emotions, location, activity, etc.) and assess their impact on unhealthy eating | Classification decision trees; hierarchical agglomerative clustering | EMA 10 times a day + ad hoc entry of lapse event | Participants were clustered into six groups based on their eating behaviour and specific rules that discriminate which conditions lead to healthy v. unhealthy eating |
| Spanakis et al., 2017b | Self-monitoring (eating; emotions), self-control (eating) | Real-time analytics with personalised micro-interventions | Analyse user-specific data, highlight the most discriminating patterns that lead to unhealthy eating behaviour and provide feedback (personalised warning messages before a possible unhealthy eating event) | Classification decision trees and hierarchical agglomerative clustering | EMA 6 times a day + ad hoc entry of lapse event | Participants reported on average 3·6 eating events (sd = 1·1) per day |
| Stein et al., 2017 | Self-monitoring (eating; physical activity; emotions), self-control (eating; physical activity), optimise goal setting | Real-time analytics with personalised micro-interventions | Predict dietary lapses and provide adaptive, semi-individualised feedback to users regarding their eating behaviour | Previously developed decision tree algorithm | Chatbot-logged meals | The percentage of healthy meals increased from 31 % of total meals logged at baseline to 67 % within 21 weeks; the percentage of unhealthy meals decreased by 54 %. Users averaged 2·4 kg (2·4 %) weight loss, and 75·7 % (53/70) of users lost weight in the programme |
| Tao et al., 2018 | Self-monitoring (physical activity, energy expenditure) | Gesture and image recognition | To estimate energy expenditure | Camera, two wearable accelerometers | Physical movement (visual and inertial data) | The fusion of visual and inertial data reduces the estimation error by 8 and 18 % compared with the use of visual-only and inertial-only sensing, respectively, and by 33 % compared with a MET-based approach |
| Thomaz et al., 2015 | Self-monitoring (eating) | Sound recognition | Recognise eating behaviour (chewing and biting sounds amid ambient noise) | Wrist-worn audio recording device; SVM, nearest-neighbour and random forest classifiers | Ambient sound | Detected eating with 86·6 % accuracy |
| Vathsangam et al., 2011 | Self-monitoring (physical activity, energy expenditure) | Gesture recognition | To estimate energy expenditure during treadmill walking | Inertial measurement unit (IMU): triaxial accelerometer and triaxial gyroscope | Physical movement | Combining accelerometer and gyroscope information leads to improved accuracy compared with using either sensor alone |
| Vathsangam et al., 2014 | Self-monitoring (physical activity, energy expenditure) | Gesture recognition | To detect physical activity using different features | Phone-based triaxial accelerometer | Physical movement | Feature combinations corresponding to sedentary energy expenditure, sedentary heart rate and sex alone resulted in errors that were higher than speed-based models and nearest-neighbour models. Size-based features such as BMI, weight and height produced lower errors. Weight was the best individual descriptor followed by height. |
| Walker et al., 2014 | Self-monitoring (eating behaviour) | Sound recognition | To automatically detect ingestion | Throat microphone located over laryngopharynx | Eating sound | > 94 % of ingestion sounds are correctly identified with false-positive rates around 9 % based on 10-fold cross-validation |
| Wang et al., 2019 | Self-monitoring of weight loss progress | Predictive analytics | Predict weight loss based on socio-temporal context | Linear regression; stochastic gradient descent (SGD) | Secondary data | Weight loss can be predicted from socio-temporal information |
| Yunus et al., 2019 | Self-monitoring (energy intake) | Image recognition | To automatically estimate food attributes such as ingredients and nutritional value | Existing image datasets | Food type and portion | Results showed a top-1 classification rate of up to 85 % |
| Zhang et al., 2017 | Self-monitoring (eating behaviour) | Gesture and image recognition | To detect eating | Wrist-worn sensor (Microsoft Band 2 accelerometer and gyroscope) and an HD webcam camera | Eating and non-eating gestures | Results showed a correlation between feeding gesture count and energy intake in unstructured eating (r = 0·79, P-value = 0·007) |
| Zhang et al., 2018 | Self-monitoring (physical activity) | Wi-Fi signal recognition | To recognise general physical activity | The software platform, a signal transmitter and a signal receiver | Wi-Fi signal | Results showed a recognition rate of 99·05 % for the general presence of physical activity and an average recognition rate of 92 % when detecting four common classes of activities |
| Zhou et al., 2019 | Self-monitoring (physical activity) | Predictive analytics | Predict exercise lapse | SVM; logistic regression | Secondary data | Discontinuation prediction score (DiPS) makes accurate predictions on exercise goal lapse based on short-term data. The most predictive features were steps and physical activity intensity |
| Zhou et al., 2020 | Self-monitoring (physical activity), self-control (physical activity), optimise goal setting | Real-time analytics with recommendations | Adaptively compute personalised step goals that are predicted to maximise future physical activity, based on each participant’s past step data and goals | Behavioural analytics algorithm (BAA) | Phone accelerometer | Participants in the intervention group had a decrease in mean (sd) daily step count of 390 (490) steps between run-in and 10 weeks, compared with a decrease of 1350 (420) steps among control participants (n 30; P = 0·03). The net difference in daily steps between the groups was 960 steps (95 % CI 90, 1830 steps) |
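
Many of the gesture-recognition studies above (e.g., Arif et al., 2017; Dobbins et al., 2017; Hua et al., 2020) share the same pipeline: segment raw inertial signals into fixed windows, extract summary features and train a supervised classifier. The sketch below illustrates that pattern with a random forest on synthetic data; the window length, feature set and labels are illustrative assumptions, not any study's exact configuration.

```python
# Minimal sketch of accelerometer-based activity classification.
# Window length, features and labels are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(acc, window=128):
    """Per-window summary statistics for a (n_samples, 3) tri-axial signal."""
    n = (len(acc) // window) * window
    wins = acc[:n].reshape(-1, window, 3)
    return np.hstack([wins.mean(axis=1), wins.std(axis=1),
                      wins.min(axis=1), wins.max(axis=1)])

rng = np.random.default_rng(0)
acc = rng.normal(size=(128 * 200, 3))      # stand-in for real IMU recordings
X = window_features(acc)                   # 200 windows x 12 features
y = rng.integers(0, 4, size=len(X))        # stand-in labels for 4 activities

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```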
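Kyritsis et al. (2019) report an F1 score under leave-one-subject-out (LOSO) evaluation, a protocol that checks whether a model generalises to unseen people rather than memorising individuals. A minimal sketch of LOSO scoring follows, with a placeholder classifier and synthetic features standing in for the smartwatch signals.

```python
# Sketch of leave-one-subject-out (LOSO) evaluation with an F1 score.
# The classifier and features are placeholders, not the original pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))               # per-window inertial features
y = rng.integers(0, 2, size=300)            # bite / no-bite labels
subjects = np.repeat(np.arange(10), 30)     # 10 subjects, 30 windows each

scores = []
for train, test in LeaveOneGroupOut().split(X, y, groups=subjects):
    model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    scores.append(f1_score(y[test], model.predict(X[test])))
print(f"mean LOSO F1: {np.mean(scores):.3f}")
```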
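The OnTrack studies (Forman et al., 2018, 2019a; Goldstein et al., 2018) predict dietary lapses from EMA responses by combining a weighted vote of several classifiers. The sketch below approximates that idea in scikit-learn: GradientBoosting and GaussianNB stand in for the original LogitBoost and Bayes Net components, and the features, weights and trigger threshold are assumptions.

```python
# Sketch of a weighted-vote ensemble for dietary-lapse prediction.
# Component models approximate the originals; weights are illustrative.
import numpy as np
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 8))      # EMA answers: mood, cravings, time of day...
y = rng.integers(0, 2, size=400)   # 1 = lapse reported in the next window

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0)),
                ("bag", BaggingClassifier(random_state=0)),
                ("nb", GaussianNB())],
    voting="soft", weights=[2, 2, 1, 1])    # weighted vote of predictions
risk = ensemble.fit(X, y).predict_proba(X)[:, 1]
print("trigger intervention:", (risk > 0.5).sum(), "of", len(risk), "windows")
```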
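Several energy-expenditure rows (e.g., Kang et al., 2019; Vathsangam et al., 2014) frame the task as regression from movement features, evaluated with RMSE and R². A minimal sketch under those assumptions, with a linear model and synthetic data in place of the studies' sensors:

```python
# Sketch of energy-expenditure regression scored with RMSE and R².
# The linear model, feature set and synthetic targets are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 5))      # e.g. acceleration counts, heart rate, ...
y = X @ np.array([0.5, 0.3, 0.1, 0.0, 0.0]) + rng.normal(0, 0.2, 500)  # MET

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pred = LinearRegression().fit(X_tr, y_tr).predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"RMSE {rmse:.3f}, R² {r2_score(y_te, pred):.3f}")
```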