Table 2. Summary of studies applying facial expression and emotion analysis in ASD research.
Reference | Focus | N participants | Age | Input data/device used | Method used | Dataset |
---|---|---|---|---|---|---|
Leo et al.47 | Facial expression for quantitative assessment | 17 ASD, 10 TD | 6–13 years | Image sequences | Deep learning | Own dataset |
Kalantarian et al.36 | Facial emotion for mobile games | 8 ASD | 6–12 years | Mobile phone | Ensemble classification (AWS + Sighthound + Azure) | Own dataset |
Kalantarian et al.37 | Facial expression for quantitative assessment | 8 ASD, 5 TD | ASD: 8.5 ± 1.85 years; TD: 4.4 ± 0.54 years | Video, mobile phone | Histogram of Oriented Gradients (HOG) + SVM | Own dataset |
Han et al.38 | Emotional expression recognition | 25 ASD | | Camera | Deep learning, CNN | 128,129 |
Tang et al.39 | Automatic smile detection | 11 ASD, 23 TD | 6–24 months | Video, two wireless cameras | Deep learning, CNN | GENKI-4K, CelebA132, RCLA&NBH Smile |
Daniels et al.40 | Emotion recognition for assistive technology | 23 ASD, 20 TD | 6–17 years | Google Glass | n/a | |
Jazouli et al.41 | Emotion recognition for assistive technology | 10 ASD | | 3D image, Microsoft Kinect | | Own dataset |
Washington et al.42 | Emotion recognition for assistive technology | 14 ASD | 9.57 years [3.37, 4–15] | Video/Google Glass and mobile phone | Machine learning, Histogram of Oriented Gradients (HOG) + SVM | 128,139–143 |
Voss et al.43 | Emotion recognition for assistive technology | 20 ASD, 20 TD | | Video/Google Glass and mobile phone | Machine learning, Histogram of Oriented Gradients (HOG) + SVM | n/a |
Vahabzadeh et al.44 | Emotion recognition for assistive technology | 8 ASD | 11.7–20.5 years | Video, Google Glass | n/a | |
Leo et al.45 | Emotion recognition for behaviour monitoring | 3 ASD | | Video, Robokind R25 Robot | | 128 |
Pan et al.46 | Facial emotion for behaviour analysis | 2 ASD | | Video, NAO robot | | Own dataset |
Coco et al.48 | Facial expression analysis for diagnosis | 5 ASD, 5 TD | 65.38 months [15.86, 48–65 months] | Video, webcam | Deep learning, Histogram of Oriented Gradients (HOG) features combined with a linear classifier, CNN | DISFA, SEMAINE, and BP4D datasets |
Leo et al.49 | Facial expression for quantitative assessment | 17 ASD | 6–13 years | Image sequences | Deep learning | Own dataset |
Samad et al.50 | 3D facial imaging for physiology-based impairment detection | 8 ASD, 8 TD | 7–20 years | 3D images, high-resolution 3D facial imaging sensor (3dMD) | n/a | |
Leo et al.51 | Facial expression recognition for assistive technology | 1 ASD, 1 TD | | Video | Deep learning, Facial Action Coding System (FACS) | Own dataset |
Guha et al.52 | Facial expression for quantitative assessment | 20 ASD, 19 TD | 9–14 years | Motion capture data, 6 infra-red motion-capture cameras | Deep learning, Facial Action Coding System (FACS) | Own dataset |
Ahmed and Goodwin53 | Facial expression for predicting engagement and learning performance | 7 ASD | 8–19 years | Video, camera | Computer Expression Recognition Toolbox | Own dataset |
Harrold et al.54 | Facial expression for assistive technology | 2 ASD, 4 TD | 8–10 years | Video, Apple iPad | n/a | |
Harrold et al.55 | Facial expression for assistive technology | 2 ASD, 4 TD | 8–10 years | Video, Apple iPad | n/a | |
White et al.56 | Facial emotion expression and recognition | 20 ASD, 20 TD | 9–12 years | 3D data, Microsoft Kinect | n/a | |
Garcia-Garcia et al.57 | Facial expression for learning emotional intelligence | 3 ASD | 8–10 years | Video, mobile phone | Affectiva SDK | n/a |
Jain et al.58 | Facial expression recognition for assistive technology | 6 ASD | 5–12 years | Video, webcam | | 128 |
Li et al.59 | Facial attributes for ASD classification | 49 ASD, 39 TD | | Video, Apple iPad | Deep learning, CNN | Training: AffectNet133 and EmotioNet134; evaluation: own dataset |
Shukla et al.60 | Facial image analysis for diagnosis | 91 ASD, 1035 NDD, 1126 TD | | Image, camera | Deep learning, CNN | Own dataset |
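Several rows above (Kalantarian et al.37, Washington et al.42, Voss et al.43, Coco et al.48) name the same generic pipeline: Histogram of Oriented Gradients (HOG) features fed to a linear classifier. The sketch below is a minimal, hypothetical illustration of that approach, not the implementation of any cited study; the synthetic face crops, expression labels, and HOG parameter settings are placeholder assumptions.

```python
# Minimal HOG + linear SVM expression-classification sketch.
# Random arrays stand in for pre-cropped grayscale face images
# and expression labels; no cited study's data or settings are used.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
faces = rng.random((200, 64, 64))      # placeholder 64x64 face crops
labels = rng.integers(0, 4, size=200)  # placeholder expression classes

# One HOG descriptor per image: 9 orientation bins, 8x8-pixel cells,
# 2x2-cell blocks with L2-Hys normalisation (commonly used defaults).
features = np.array([
    hog(img, orientations=9, pixels_per_cell=(8, 8),
        cells_per_block=(2, 2), block_norm="L2-Hys")
    for img in faces
])

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)

# Linear SVM on the HOG descriptors, evaluated on a held-out split.
clf = LinearSVC(C=1.0, max_iter=5000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

In practice the studies tabulated here would replace the random arrays with face regions detected and cropped from video frames, but the feature-extraction and classification steps follow this general shape.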