Facial Plastic Surgery & Aesthetic Medicine
. 2020 Feb 10;22(1):42–49. doi: 10.1089/fpsam.2019.29000.gua

Toward an Automatic System for Computer-Aided Assessment in Facial Palsy

Diego L Guarin 1,,2,,*, Yana Yunusova 1,,3,,4, Babak Taati 1,,5,,6, Joseph R Dusseldorp 7, Suresh Mohan 2, Joana Tavares 8, Martinus M van Veen 2, Emily Fortier 2, Tessa A Hadlock 2, Nate Jowett 2
PMCID: PMC7362997  PMID: 32053425

Abstract

Importance: Quantitative assessment of facial function is challenging, and subjective grading scales such as House–Brackmann, Sunnybrook, and eFACE have well-recognized limitations. Machine learning (ML) approaches to facial landmark localization carry great clinical potential as they enable high-throughput automated quantification of relevant facial metrics from photographs and videos. However, the translation from research settings to clinical application still requires important improvements.

Objective: To develop a novel ML algorithm for fast and accurate localization of facial landmarks in photographs of facial palsy patients and utilize this technology as part of an automated computer-aided diagnosis system.

Design, Setting, and Participants: Portrait photographs of 8 expressions obtained from 200 facial palsy patients and 10 healthy participants were manually annotated by 3 trained clinicians, who localized 68 facial landmarks in each photograph using a custom graphical user interface. A novel ML model for automated facial landmark localization was trained using this disease-specific database. Algorithm accuracy was compared with manual markings and with the output of a model trained using a larger database consisting only of healthy subjects.

Main Outcomes and Measurements: Root mean square error normalized by the interocular distance (NRMSE) of facial landmark localization between prediction of ML algorithm and manually localized landmarks.

Results: Publicly available algorithms for facial landmark localization provide poor localization accuracy when applied to photographs of patients compared with photographs of healthy controls (NRMSE, 8.56 ± 2.16 vs. 7.09 ± 2.34, p ≪ 0.01). We found significant improvement in facial landmark localization accuracy for the facial palsy patient population when using a model trained with a relatively small number of patient photographs (1440) compared with a model trained using several thousand more images of healthy faces (NRMSE, 6.03 ± 2.43 vs. 8.56 ± 2.16, p ≪ 0.01).

Conclusions and Relevance: Retraining a computer vision facial landmark detection model with fewer than 1600 annotated images of patients significantly improved landmark detection performance in frontal view photographs of this population. The new annotated database and facial landmark localization model represent the first steps toward an automatic system for computer-aided assessment in facial palsy.

Level of Evidence: 4


Key Points

Question: Can recent developments in ML and computer vision be used to develop an objective and automatic system for computer-aided assessment in facial palsy?

Findings: In this research article, we found that by using a relatively small number of manually annotated photographs from a patient-specific database, it is possible to obtain significant improvement in the accuracy of facial measurements provided by a popular ML algorithm.

Meaning: The results presented in this article represent the first steps toward the development of an automatic system for computer-aided assessment in facial palsy.

Introduction

Management of facial neuromuscular disorders is hampered by lack of a universal and objective grading system to characterize disease severity, recovery, and response to therapeutic interventions.1 Quantifying static facial features and displacements occurring with facial expressions is a promising technique for standardizing assessment in facial palsy, whose reported U.S. incidence exceeds 150,000 cases per year.2 Several methods exist for measuring facial features and movements. Caliper assessments offer high accuracy yet are tedious and must be performed in person. Computer-based techniques to quantify facial displacements are now widely employed.3–7 Early approaches comprised manual identification of facial landmarks on digital images within specialized software, from which relevant distances and angles could be readily calculated. Although such techniques enabled retrospective assessment of facial function, manual tagging of digital images is resource intensive, error prone, and infeasible for dynamic tracking of facial movements from video. To automate measurement of facial displacements, physical markers placed at specific facial landmarks have been employed, and their location was tracked using customized software and hardware.6 Marker-based tracking is limited as manual marker localization is time consuming, subjective, and requires specialized recording conditions.

Machine learning (ML)-based computer vision algorithms enable rapid and fully automated tracking of facial displacements from digital images and videos recorded under typical conditions with consumer-grade cameras. Such facial landmark detection algorithms are usually trained using databases of manually annotated facial photographs. Once trained, these ML algorithms can predict the position of facial landmarks in a new photograph with high accuracy, without human intervention.8–13 ML algorithms for facial landmark localization are increasingly being used to study facial palsy,7,14–17 Parkinson disease,18 stroke,19 amyotrophic lateral sclerosis,20 and dementia.21 Owing to their training using predominantly normal subjects, current ML models for facial landmark localization may be biased against patients and demonstrate inadequate accuracy when presented with faces of patients with neuromuscular disease impacting facial movements and expression.21 Herein, we hypothesize that an ML model for facial landmark localization trained with facial photographs from a disease-specific clinical database will demonstrate improved tracking accuracy when presented with faces of patients with the condition, compared with a model trained using a much larger database of normal subjects.

To evaluate our hypothesis, we introduce the first database of annotated clinical photographs of patients with unilateral facial palsy, employ it to evaluate the bias against patients of a popular facial landmark localization algorithm, and train a ML model for automated facial landmark localization in this population. We further demonstrate the utility of an open-access and user-friendly software, Emotrics, for extracting clinically relevant measurements from photographs and videos in an automated manner using the ML model trained herein.

Methods

Automatic facial landmark localization

We employed a popular approach for automatic facial landmark localization in facial photographs known as cascade of regression trees.11,22–25 Specifically, we employed the algorithm proposed by Kazemi and Sullivan,11 which provides accurate facial landmark localization results under a multitude of pose, illumination, and expression (PIE) conditions26 and can process medical images in just a few hundred milliseconds without the use of graphical processing units or other specialized hardware.10 Implementations of this algorithm for facial landmark localization are readily available in open source ML libraries such as OpenCV27 and Dlib.28 These implementations were trained using the 300-W data set from the Intelligent Behavior Understanding Group (iBUG).12 This data set is a concatenation of several freely available databases (LFPW,29 HELEN,30 AFW,31 and iBUG32) and comprises 11,500 in-the-wild photographs of >4000 healthy subjects. Model training was performed by manually annotating a set of facial landmarks in each photograph following the 68-point Carnegie Mellon University multiple pose, illumination, and expression (multi-PIE) annotation scheme.33,34 Manually annotated landmarks outlined the superior border of the brow, the free margin of the upper and lower eyelids, the nasal midline, the nasal base, the mucosal edge and vermillion–cutaneous junction of the upper and lower lips, and the lower two-thirds of the face.
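As a quick reference for the regions just described, the 68-point scheme can be sketched as an index map. The index ranges follow the standard iBUG/300-W convention; the region names are our own labels, not terminology from the paper.

```python
# Standard iBUG/300-W 68-point landmark index convention, grouped into the
# facial regions described in the text. Region names are illustrative labels.
LANDMARK_REGIONS = {
    "jaw":        list(range(0, 17)),   # facial contour, lower two-thirds of face
    "right_brow": list(range(17, 22)),  # superior border of the brow
    "left_brow":  list(range(22, 27)),
    "nose_ridge": list(range(27, 31)),  # nasal midline
    "nose_base":  list(range(31, 36)),
    "right_eye":  list(range(36, 42)),  # free margins of the eyelids
    "left_eye":   list(range(42, 48)),
    "outer_lips": list(range(48, 60)),  # vermillion-cutaneous junction
    "inner_lips": list(range(60, 68)),  # mucosal edge
}

# Sanity check: every landmark index 0..67 is assigned to exactly one region.
all_points = sorted(p for pts in LANDMARK_REGIONS.values() for p in pts)
assert all_points == list(range(68))
```

This index map is what downstream tools use to turn raw landmark coordinates into region-specific measurements.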

MEEI database

We obtained institutional review board approval to access the Massachusetts Eye and Ear (MEE) Facial Nerve Data Repository, a digital collection of facial photographs and video clips of patients with unilateral facial palsy evaluated at the MEE Facial Nerve Center. The patients had consented to use of the facial photographs for research purposes. A clinical photographer captured high-resolution photographs (1080 × 720 pixels) of a standardized series of facial expressions used to evaluate facial mimetic function.35 Figure 1 exemplifies the type of photographs taken from each patient; all images were taken from the frontal view using a digital camera with optimal lighting.

Fig. 1.


Example of the standardized set of facial expressions among facial palsy patients in the annotated database. These facial expressions enable global and zonal evaluation of facial function from still photographs.

In addition to the patients' photographs, we obtained high-resolution photographs of healthy controls from the MEEI Facial Palsy Photo and Video Standard Set14 performing the same standardized facial expressions as the patients.

Photograph annotation

Three trained clinicians independently annotated the 68 multi-PIE landmark points for each photograph in the clinical database using a custom graphical user interface10 (Emotrics Software, Mass Eye and Ear, Boston, MA). Images were uploaded into Emotrics, which provided an initial estimation of the 68 landmark points. Landmark positions for each photograph were verified and manually repositioned as necessary by each annotator. Landmark positions of the three manual annotators were averaged for each photograph to define ground-truth locations.
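The ground-truth definition above reduces to a per-photograph average over annotators, which can be sketched in a few lines (the function name is illustrative, not the authors' code):

```python
import numpy as np

def ground_truth(annotations):
    """Average landmark sets from multiple annotators for one photograph.

    annotations: list of (68, 2) arrays of x, y pixel coordinates,
    one array per annotator. Returns the (68, 2) mean positions.
    """
    stacked = np.stack(annotations)  # shape (n_annotators, 68, 2)
    return stacked.mean(axis=0)      # element-wise mean across annotators

# Toy example with three annotators whose coordinates are 0, 1, and 2:
a1 = np.zeros((68, 2))
a2 = np.ones((68, 2))
a3 = np.full((68, 2), 2.0)
gt = ground_truth([a1, a2, a3])  # every coordinate averages to 1.0
```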

Evaluating model bias and training a new model for landmark localization

Marked photographs were clustered by patient and randomly divided into three nonoverlapping groups, such that all images of a given patient appeared in only one group. The groups were model training (N = 180 subjects, equating to 90% of the database or 1440 photographs), validation (N = 10 subjects, equating to 5% of the database or 80 photographs), and test (N = 10 subjects, equating to 5% of the database or 80 photographs). Healthy control photographs (N = 10 subjects, equating to 80 photographs) were used for model testing only. Computer operations were performed in the Python programming language (version 3.6.7) on a Lenovo Thinkpad personal computer (T470, Intel Core i7–7600U processor running at 2.8 GHz with 32 GB of RAM) that did not include specialized hardware for training and evaluation of ML models (graphics processing units or tensor processing units).
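The key detail of this split is that it operates on patient IDs, not photographs, so that no patient's images leak between groups. A minimal sketch under that assumption (function name and seed are illustrative):

```python
import random

def split_by_patient(patient_ids, n_val=10, n_test=10, seed=0):
    """Divide patients (not photographs) into train/validation/test groups.

    All photographs of a given patient inherit that patient's group, so the
    groups are nonoverlapping at the patient level. Counts follow the paper
    (180 training, 10 validation, 10 test patients).
    """
    ids = sorted(set(patient_ids))
    random.Random(seed).shuffle(ids)  # reproducible random assignment
    test = set(ids[:n_test])
    val = set(ids[n_test:n_test + n_val])
    train = set(ids[n_test + n_val:])
    return train, val, test

train, val, test = split_by_patient(range(200))
```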

Evaluation of model bias against patients

Evaluation of model bias against patients was performed using the publicly available implementation of the ensemble of regression trees algorithm for facial landmark localization proposed by Kazemi and Sullivan.11 This algorithm was used to estimate the position of 68 facial landmarks in the test group of our database and in the photographs of healthy controls; predicted landmark positions were compared with the ground-truth locations provided by the manual annotation procedure. Errors in landmark localization were quantified using the root-mean-square error (RMSE) between ground-truth and algorithm-predicted landmark positions, normalized by the interocular distance (NRMSE).38
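The NRMSE metric can be sketched as follows. Using the outer eye corners (points 36 and 45 in the 68-point scheme) for the interocular distance is one common convention and is an assumption here; the paper defers the exact normalization to its cited reference.

```python
import numpy as np

def nrmse(pred, gt):
    """RMSE over 68 landmarks, normalized by interocular distance (percent).

    pred, gt: (68, 2) arrays of predicted and ground-truth coordinates.
    Interocular distance is taken between the outer eye corners
    (points 36 and 45) -- an assumed, commonly used convention.
    """
    rmse = np.sqrt(np.mean(np.sum((pred - gt) ** 2, axis=1)))
    interocular = np.linalg.norm(gt[36] - gt[45])
    return 100.0 * rmse / interocular

# Toy check: a constant (3, 4) pixel offset gives RMSE 5; with an
# interocular distance of 100 pixels, NRMSE is 5.0.
gt = np.zeros((68, 2))
gt[45] = [100.0, 0.0]
pred = gt + np.array([3.0, 4.0])
err = nrmse(pred, gt)
```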

Training a specialized model for facial landmark localization in patients

Next, photographs from the training group were employed to retrain the ensemble of regression trees model. Photographs from the validation group were used to determine the model parameters that provided the minimum landmark localization error using cross-validation. Standard model parameters such as the number of estimators, tree depth, minimum number of samples per leaf, and number of features were selected using a grid search; a total of 400 permutations were assessed. The parameter set yielding the lowest NRMSE for the validation data set was chosen for model testing and comparison.
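The grid search can be sketched as below. The value grids and the `train_and_validate` stand-in are hypothetical, chosen only so that the grid contains the 400 permutations mentioned; the actual parameter ranges are not given in the paper.

```python
import itertools

# Hypothetical parameter grids; names mirror those listed in the text.
grid = {
    "num_estimators":       [10, 15, 20, 25, 30],
    "tree_depth":           [2, 3, 4, 5],
    "min_samples_per_leaf": [5, 10, 20, 40],
    "num_features":         [400, 500, 600, 700, 800],
}  # 5 * 4 * 4 * 5 = 400 permutations, matching the count in the paper

def train_and_validate(params):
    # Placeholder: in practice this would retrain the ensemble-of-regression-
    # trees model on the training set and return validation NRMSE.
    return sum(params.values()) * 0.001  # dummy score for illustration only

best_params, best_score = None, float("inf")
n_evaluated = 0
for values in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    n_evaluated += 1
    score = train_and_validate(params)
    if score < best_score:  # keep the parameter set with the lowest NRMSE
        best_params, best_score = params, score
```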

The retrained model was compared against the implementation trained with the 300-W database by computing the NRMSE between the ground-truth and model-predicted landmark positions yielded by both models for the test group (patients and controls) of our database.

Statistical analysis

Differences between the groups (i.e., healthy vs. patients) were sought using the Kruskal–Wallis one-way analysis of variance; differences between models (i.e., publicly available vs. retrained using patients' photographs) were sought using the Wilcoxon signed rank test. Statistical significance was considered at p < 0.01.
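The two comparisons map onto standard SciPy calls: an unpaired test for healthy-vs-patient groups and a paired test for the two models evaluated on the same photographs. The data below are synthetic stand-ins, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic per-photograph NRMSE values (80 test photographs per group).
nrmse_healthy = rng.normal(7.1, 2.3, 80)   # healthy controls
nrmse_patients = rng.normal(8.6, 2.2, 80)  # facial palsy patients

# Unpaired comparison between groups: Kruskal-Wallis one-way ANOVA on ranks.
h_stat, p_groups = stats.kruskal(nrmse_healthy, nrmse_patients)

# Paired comparison between models on the same patient photographs:
# Wilcoxon signed-rank test (synthetic improvement for the retrained model).
nrmse_model_300w = rng.normal(8.6, 2.2, 80)
nrmse_model_meei = nrmse_model_300w - rng.normal(2.5, 0.5, 80)
w_stat, p_models = stats.wilcoxon(nrmse_model_300w, nrmse_model_meei)
```

The distinction matters: the group comparison involves different subjects, whereas the model comparison scores the same photographs twice, so a paired test is appropriate.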

Results

MEEI database

All patients who visited the Facial Nerve Center at MEE between October 2017 and October 2018 and provided written consent to use their photographs for research purposes were considered candidates for the database. Patients who demonstrated gross facial deformities or whose facial features were not clearly visible (due to obstruction by clothing or hair) were excluded. Media from 200 adult and pediatric patients with varying facial palsy severity and etiology, totaling 1600 photographs, were collected, manually annotated, and employed for algorithm training and testing; Table 1 presents relevant demographic and clinical information.

Table 1.

Patients' demographics

Demographics N
Age
 Range (years) 7–89
 Mean ± SD (years) 48.9 ± 17.1
Gender
 Female 135
 Male 65
Race
 White 160
 Hispanic 15
 Asian 14
 Black 11
Etiology
 Infectious
  Bell's palsy 65
  Ramsay Hunt syndrome 14
  Lyme disease 12
  Pregnancy-associated Bell's palsy 7
  Zoster sine herpete 6
  Meningitis 1
 Congenital
  Congenital facial palsy 5
 Otologic
  Cholesteatoma 2
 Neoplastic
  Vestibular schwannoma 21
  Parotid neoplasm 14
  Facial nerve schwannoma 8
  Brainstem neoplasm 4
  CNS metastasis 1
  Basal cell carcinoma 1
  Trigeminal nerve neoplasm 1
 Trauma
  Trauma 6
 Iatrogenic
  Iatrogenic 7
 Neurologic
  Hemifacial spasm 1
  Multiple sclerosis 1
 Vascular
  Brainstem stroke 1
  Venous malformation 1
  Cavernous brainstem hemangioma 1

CNS, central nervous system.

Model bias against patients

Figure 2A shows a box and whiskers plot representing the NRMSE yielded by the standard implementation of the ensemble of regression trees model trained with the 300-W database, when applied to photographs of healthy controls and patients with facial palsy. Our results demonstrated that this implementation of the algorithm yielded significantly worse facial landmark localization accuracy, as quantified by the NRMSE, for patients than for healthy subjects (8.56 ± 2.16 vs. 7.09 ± 2.34, p ≪ 0.01).

Fig. 2.


NRMSE of facial landmark localization (A). Model trained with photographs of healthy subjects applied to photographs of healthy subjects and patients. (B) Model trained with photographs of patients suffering from facial palsy applied to photographs of healthy subjects and patients. NRMSE, normalized root-mean-square error.

Specialized model accuracy

Figure 2B shows a box and whiskers plot representing the NRMSE yielded by the ensemble of regression trees model retrained with the MEEI database, when applied to photographs of healthy controls and patients suffering from facial palsy. Our analysis demonstrated that there was no significant difference in the facial landmark localization accuracy between patients and healthy subjects (6.03 ± 2.43 vs. 6.87 ± 2.28, p = 0.03).

There was no significant difference in the landmark localization accuracy provided by the models trained with the 300-W and MEEI databases when applied to healthy subjects (7.09 ± 2.34 vs. 6.87 ± 2.28, p = 0.162). In contrast, the retrained model provided significantly improved accuracy when applied to patients (8.56 ± 2.16 vs. 6.03 ± 2.43, p ≪ 0.01). Figure 3 illustrates the improved accuracy of the model trained with the MEEI database over the model trained with the 300-W database. Figure 3A and C shows obvious localization errors for points defining the facial contour and lips in the 300-W model output; Figure 3B and D shows that fewer and smaller errors are present in the output of the MEEI model.

Fig. 3.


Comparison of automatic facial landmark localizations by models trained using the 300-W (A, C) and MEEI (B, D) databases among two patients with facial palsy in the validation group.

Discussion

Objective quantification of disease severity facilitates improved understanding of disease progression, recovery, and treatment response, and patient–clinician and interclinician communication. Facial palsy severity is typically assessed using subjective clinician facial grading systems including the House–Brackmann,36 Sunnybrook,37 and eFACE38 scales. Such approaches are limited by high inter-rater variability, and require considerable training for proper use and interpretation.39,40 Although more objective methods for quantifying facial displacements have been described, no single tool has achieved widespread use.

In this study, we demonstrated that ML approaches can provide objective, automatic, and accurate facial measurements from photographs of patients suffering from facial palsy, and that these methods therefore have the potential to disrupt current clinical practice for diagnosis and assessment of the condition. However, our results demonstrated that publicly available models, trained with databases of healthy subjects, provide significantly worse landmark localization accuracy when applied to photographs of patients. We also demonstrated that retraining the facial landmark localization model using a small number of photographs from a disease-specific clinical database significantly improves facial landmark localization accuracy in this patient population. The standardized recording conditions (pose, illumination, expression, and background) of photographs in the clinical database likely explain the high accuracy of the model, despite use of a relatively small training data set. These results supported our hypothesis that a model for facial landmark localization trained using disease-specific photographs would demonstrate improved tracking accuracy when presented with faces of patients with the condition, in comparison with a model trained using a much larger database of normal subjects.

Finally, we found no significant difference in the landmark localization accuracy yielded by the models trained with the 300-W and MEEI databases when applied to photographs of healthy subjects. This is an unexpected but welcome result, as it indicates that the new model can be used to track the recovery of a patient across a continuum from highly impaired to normal.

Clinical application: computer-aided assessment in facial palsy

The facial landmark localization model trained with the MEEI database was packaged into the latest version of a previously characterized open-source customized software platform for automatic estimation of clinically relevant facial metrics—Emotrics.10 This tool uses landmark positions provided by a designated 68-point facial landmark localization ML model to estimate facial metrics relevant to the field of facial palsy in a high-throughput manner.10,14–17 Figure 4 illustrates various facial metrics computed automatically using Emotrics from clinical photographs of three subjects during full-effort smile. Photographs were taken from the MEEI Facial Palsy Photo and Video Standard Set14 and were not used to train or validate the models presented here. The first subject demonstrates normal facial function and near-symmetric facial metrics. The second subject developed left-sided flaccid facial paralysis following Bell's palsy. The third subject developed left-sided postparalytic facial palsy and synkinesis following Bell's palsy. Marked differences in oral commissure position and palpebral fissure heights between sides and subjects are readily quantified.

Fig. 4.


Examples of automated facial metrics calculated using Emotrics software. Measurements were computed using estimated landmark position provided by the ML model trained using photographs of facial palsy patients. Red background indicates facial measurements that are markedly different among facial sides. Subject 1 is normal, Subject 2 suffers from flaccid facial palsy, and Subject 3 suffers from postparalytic facial palsy. Presented measures include eyebrow height (vertical distance from the midpupillary point to the superior border of the brow), palpebral fissure height (vertical distance between central portions of upper and lower lid margins), and commissure excursion (distance from the facial midline at the lower lip vermilion junction to the oral commissure). ML, machine learning.
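Two of the metrics defined in the caption can be sketched directly from 68-point landmarks. The lid, commissure, and lower-lip index choices below follow the standard multi-PIE numbering and are assumptions; the precise definitions implemented in Emotrics may differ.

```python
import numpy as np

def palpebral_fissure_height(lm, right=True):
    """Vertical distance between central upper and lower lid margins.

    lm: (68, 2) landmark array. Uses the two central upper-lid points
    (37-38 right, 43-44 left) and lower-lid points (40-41 right, 46-47 left).
    """
    upper = lm[[37, 38]] if right else lm[[43, 44]]
    lower = lm[[40, 41]] if right else lm[[46, 47]]
    return float(abs(upper[:, 1].mean() - lower[:, 1].mean()))

def commissure_excursion(lm, right=True):
    """Distance from the lower-lip midline point (57) to the oral
    commissure (48 right, 54 left)."""
    return float(np.linalg.norm(lm[48 if right else 54] - lm[57]))

# Toy landmark array: only the points used above are filled in.
lm = np.zeros((68, 2))
lm[[37, 38]] = [[10, 50], [20, 50]]  # right upper lid margin
lm[[40, 41]] = [[20, 60], [10, 60]]  # right lower lid margin
lm[57] = [50, 100]                   # lower-lip midline point
lm[48] = [20, 100]                   # right oral commissure
```

With these toy coordinates the right palpebral fissure height is 10 pixels and the right commissure excursion is 30 pixels; in practice, side-to-side differences in such metrics are what flag asymmetric function.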

Limitations

There are several limitations with the facial landmark localization model described herein. The database comprises patient photographs from a single center and reflects its demographics. The database includes more female (N = 135) than male (N = 65) patients, and its racial demographic is mostly white (N = 160), with small representation of minorities, including Hispanic (N = 15), Asian (N = 14), and black (N = 11). In addition, the database comprises more middle-aged adults (age group = (25, 64] years, N = 142) than younger adults (age group = (18, 25] years, N = 12), older adults (age group = 64+ years, N = 41), and children (age group = (0, 18] years, N = 5). Other sources of model bias such as the presence of facial hair were not assessed. Nonsymmetric distribution of patient demographics might lead to prediction error bias; for example, the model might demonstrate higher performance among adult middle-aged white women as they comprised the largest cohort of the training data set. The model is further limited in that recording conditions (PIE) were specific to our clinical center. Although patient pose (requiring frontal view of face with neutral roll, tilt, and yaw) and expression may be readily standardized across clinical centers, illumination conditions are more challenging to standardize and their impact on model accuracy has yet to be assessed. Further work will seek to expand the training data set to include patient photographs from multiple clinical centers to improve model accuracy across a wider range of patient demographics and disease severities. The facial landmark localization model was applied only to patient photographs of eight fixed facial expressions. Future study will seek to assess the performance of this model for dynamic tracking of facial landmarks during expression from videos of patients with unilateral facial palsy.

Availability

Emotrics and the two ML models for facial landmark localization discussed here are freely available online on GitHub (www.github.com/dguari1) and the Sir Charles Bell Society website (www.sircharlesbell.com). Emotrics is open-access and open-source; the Python-based code can be modified as desired. Owing to patient privacy concerns, the annotated database cannot be shared online; requests from research institutions to access the data will be reviewed on a case-by-case basis by the MEEI Institutional Review Board.

Conclusions

We introduced the first manually annotated database of standardized PIE photographs of patients with unilateral facial palsy. Using this data set, we demonstrated that an ML model for automatic facial landmark localization trained with disease-specific photographs outperforms, in this patient population, a model trained using a much larger data set of healthy subjects. We demonstrated the clinical utility of this approach in the quantification of facial palsy disease severity from database photographs and characterized an open-access software tool that facilitates rapid calculation of relevant facial metrics in this patient population.

Author Disclosure Statement

No competing financial interests exist.

Funding Information

D.L.G. was supported by the Fonds de Recherche du Québec – Nature et technologies, grant number 208637.

References

  • 1. Hadlock TA, Urban LS. Toward a universal, automated facial measurement tool in facial reanimation. Arch Facial Plast Surg. 2012;14(4):277–282 [DOI] [PubMed] [Google Scholar]
  • 2. Bleicher JN, Hamiel S, Gengler JS, Antimarino J. A survey of facial paralysis: etiology and incidence. Ear Nose Throat J. 1996;75(6):355–358 [PubMed] [Google Scholar]
  • 3. Bray D, Henstrom DK, Cheney ML, Hadlock TA. Assessing outcomes in facial reanimation: evaluation and validation of the SMILE system for measuring lip excursion during smiling. Arch Facial Plast Surg. 2010;12(5):352–354 [DOI] [PubMed] [Google Scholar]
  • 4. Coulson SE, Croxson GR, Gilleard WL. Three-dimensional quantification of “still” points during normal facial movement. Ann Otol Rhinol Laryngol. 1999;108(3):265–268 [DOI] [PubMed] [Google Scholar]
  • 5. Frey M, Michaelidou M, Tzou CH, et al. Three-dimensional video analysis of the paralyzed face reanimated by cross-face nerve grafting and free gracilis muscle transplantation: quantification of the functional outcome. Plast Reconstr Surg. 2008;122(6):1709–1722 [DOI] [PubMed] [Google Scholar]
  • 6. Gerós A, Horta R, Aguiar P. Facegram—objective quantitative analysis in facial reconstructive surgery. J Biomed Inform. 2016;61:1–9 [DOI] [PubMed] [Google Scholar]
  • 7. Guo Z, Dan G, Xiang J, et al. An unobtrusive computerized assessment framework for unilateral peripheral facial paralysis. IEEE J Biomed Health Inform. 2018;22(3):835–841 [DOI] [PubMed] [Google Scholar]
  • 8. Cootes TF, Edwards GJ, Taylor CJ. Active appearance models. IEEE Trans Pattern Anal Mach Intell. 2001;23(6):681–685 [Google Scholar]
  • 9. Fan H, Zhou E. Approaching human level facial landmark localization by deep learning. Image Vision Comput. 2016;47:27–35 [Google Scholar]
  • 10. Guarin DL, Dusseldorp JR, Hadlock TA, Jowett N. Automated facial measurements in facial paralysis: a machine learning approach. JAMA Facial Plast Surg. 2017;20(4):335–337 [DOI] [PubMed] [Google Scholar]
  • 11. Kazemi V, Sullivan J. One millisecond face alignment with an ensemble of regression trees. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. New Jersey: IEEE Publishing; 2014
  • 12. Sagonas C, Tzimiropoulos G, Zafeiriou S, Pantic M. 300 faces in-the-wild challenge: the first facial landmark localization challenge. In Proceedings of the IEEE International Conference on Computer Vision Workshops. New Jersey: IEEE Publishing; 2013
  • 13. Zhang Z, Luo P, Loy CC, Tang X. Facial landmark detection by deep multi-task learning. In European Conference on Computer Vision. New York: Springer; 2014
  • 14. Greene JJ, Tavares J, Guarin DL, et al. The spectrum of facial palsy: the MEEI facial palsy photo & video standard set [published online April 25, 2019]. Laryngoscope. doi: 10.1002/lary.27986 [DOI] [PubMed] [Google Scholar]
  • 15. Greene JJ, Tavares J, Guarin DL, Jowett N, Hadlock T. Surgical refinement following free gracilis transfer for smile reanimation. Ann Plast Surg. 2018;81(3):329–334 [DOI] [PubMed] [Google Scholar]
  • 16. Greene JJ, Tavares J, Mohan S, Jowett N, Hadlock T. Long-term outcomes of free gracilis muscle transfer for smile reanimation in children. J Pediatr. 2018;202:279–284. e2. [DOI] [PubMed] [Google Scholar]
  • 17. Jowett N, Hadlock TA. Facial palsy: diagnostic and therapeutic management. Otolaryngol Clin North Am. 2018;51(6):xvii–xviii [DOI] [PubMed] [Google Scholar]
  • 18. Li MH, Mestre TA, Fox SH, Taati B. Vision-based assessment of parkinsonism and levodopa-induced dyskinesia with pose estimation. J Neuroeng Rehabil. 2018;15(1):97. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19. Lanz C, Olgay BS, Denzler J, Gross H-M. Facial landmark localization and feature extraction for therapeutic face exercise classification. In International Conference on Computer Vision, Imaging and Computer Graphics. New York: Springer; 2013
  • 20. Bandini A, Green JR, Taati B, Orlandi S, Zinman L, Yunusova Y. Automatic detection of amyotrophic lateral sclerosis (ALS) from video-based analysis of facial movements: speech and non-speech tasks. In 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018). New Jersey: IEEE Publishing; 2018
  • 21. Taati B, Zhao S, Ashraf AB, et al. Algorithmic bias in clinical populations—evaluating and improving facial analysis technology in older adults with dementia. IEEE Access. 2019;7:25527–25534 [Google Scholar]
  • 22. Dollár P, Welinder P, Perona P. Cascaded pose regression. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. New Jersey: IEEE Publishing; 2010
  • 23. Johnston B, de Chazal P. A review of image-based automatic facial landmark identification techniques. EURASIP J Image Video Process. 2018;2018(1):86 [Google Scholar]
  • 24. Sun Y, Wang X, Tang X. Deep convolutional network cascade for facial point detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. New Jersey: IEEE Publishing; 2013
  • 25. Tzimiropoulos G. Project-out cascaded regression with an application to face alignment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. New Jersey: IEEE Publishing; 2015
  • 26. Bulat A, Tzimiropoulos G. How far are we from solving the 2D & 3D face alignment problem? (And a dataset of 230,000 3D facial landmarks). In Proceedings of the IEEE International Conference on Computer Vision. New Jersey: IEEE Publishing; 2017
  • 27. Baksheev A, Erumihov V, Kornyakov K, Pulli K. Realtime computer vision with OpenCV. Queue. 2012;10(4):40 pages. [Google Scholar]
  • 28. King DE. Dlib-ml: a machine learning toolkit. J Mach Learn Res. 2009;10(Jul):1755–1758 [Google Scholar]
  • 29. Belhumeur PN, Jacobs DW, Kriegman DJ, Kumar N. Localizing parts of faces using a consensus of exemplars. IEEE Trans Pattern Anal Mach Intell. 2013;35(12):2930–2940 [DOI] [PubMed] [Google Scholar]
  • 30. Le V, Brandt J, Lin Z, Bourdev L, Huang TS. Interactive facial feature localization. In European Conference on Computer Vision. New York: Springer; 2012
  • 31. Ramanan D, Zhu X. Face detection, pose estimation, and landmark localization in the wild. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Citeseer. New Jersey: IEEE Publishing; 2012
  • 32. Sagonas C, Tzimiropoulos G, Zafeiriou S, Pantic M. A semi-automatic methodology for facial landmark annotation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. New Jersey: IEEE Publishing; 2013
  • 33. Gross R, Matthews I, Cohn J, Kanade T, Baker S. Multi-pie. Image Vision Comput. 2010;28(5):807–813 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34. Sim T, Baker S, Bsat M. The CMU pose, illumination, and expression (PIE) database. In Proceedings of Fifth IEEE International Conference on Automatic Face Gesture Recognition. New Jersey: IEEE Publishing; 2002
  • 35. Santosa KB, Fattah A, Gavilán J, Hadlock TA, Snyder-Warwick AK. Photographic standards for patients with facial palsy and recommendations by members of the Sir Charles Bell Society. JAMA Facial Plast Surg. 2017;19(4):275–281 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36. House W. Facial nerve grading system. Otolaryngol Head Neck Surg. 1985;93:184–193 [DOI] [PubMed] [Google Scholar]
  • 37. Ross BG, Fradet G, Nedzelski JM. Development of a sensitive clinical facial grading system. Otolaryngol Head Neck Surg. 1996;114(3):380–386 [DOI] [PubMed] [Google Scholar]
  • 38. Banks CA, Bhama PK, Park J, Hadlock CR, Hadlock TA. Clinician-graded electronic facial paralysis assessment: the eFACE. Plast Reconstructive Surg. 2015;136(2):223e–230e [DOI] [PubMed] [Google Scholar]
  • 39. Banks CA, Jowett N, Azizzadeh B, et al. Worldwide testing of the eFACE facial nerve clinician-graded scale. Plast Reconstructive Surg. 2017;139(2):491e–498e [DOI] [PubMed] [Google Scholar]
  • 40. Reitzen SD, Babb JS, Lalwani AK. Significance and reliability of the House-Brackmann grading system for regional facial nerve function. Otolaryngol Head Neck Surg. 2009;140(2):154–158 [DOI] [PubMed] [Google Scholar]
