Published in final edited form as: Proc SPIE Int Soc Opt Eng. 2012 Feb 14;8314:83143O. doi: 10.1117/12.912537

Automated Detection of Pain from Facial Expressions: A Rule-Based Approach Using AAM

Zhanli Chen 1, Rashid Ansari 1, Diana J Wilkie 2

Abstract

In this paper, we examine the problem of using video analysis to assess pain, an important problem especially for critically ill, non-communicative patients and people with dementia. We propose and evaluate an automated method to detect the presence of pain manifested in patient videos, using a unique and large collection of cancer patient videos captured in patient homes. The method is based on detecting pain-related facial action units defined in the Facial Action Coding System (FACS), which is widely used for objective assessment in pain analysis. In our research, a person-specific Active Appearance Model (AAM) based on the project-out inverse compositional method is trained for each patient individually for modeling purposes. A flexible representation of the shape model is used in a rule-based method that is better suited than the more commonly used classifier-based methods to the cancer patient videos, in which pain-related facial actions occur infrequently and subtly. The rule-based method relies on feature points, extracted from the shape vertices of the AAM, that provide facial action cues and have a natural correspondence to facial muscular movement. In this paper, we investigate the detection of a commonly used set of pain-related action units in both the upper and lower face. Our detection results show good agreement with the results obtained by three trained FACS coders who independently reviewed and scored the action units in the cancer patient videos.

Keywords: Coefficient Partitioning AAM, Automated Pain Detection, Rule-Based Recognition, FACS

1. INTRODUCTION

Research has shown that facial expressions can provide reliable measures of pain across the human lifespan [1], and there is good consistency among facial expressions corresponding to painful stimuli. The Facial Action Coding System (FACS) is widely used in pain analysis because it provides an objective assessment for scoring and recognizing Action Units (AUs), which represent the muscular activity that produces momentary changes in facial appearance [2]. Several studies using FACS have identified a collection of core Action Units that are specific to pain and that occur singly or in combination, such as brow lowering (AU 4), cheek raising and eyelid tightening (AU 6 and 7), nose wrinkling and upper lip raising (AU 9 and 10), and eye closing (AU 43). These results were also confirmed in a study of facial expressions of pain in cancer patients [3]. Facial expression coding using FACS is generally performed offline by trained experts on video of a patient. While FACS enables assessment of pain at each time step (i.e., each video frame), manual AU detection is very time consuming, which makes its real-time clinical use by human observers prohibitive [4]. Therefore, the development of automated AU detection for pain would be a significant and efficient innovation for clinical practice.

1.1. Past work on automated detection of facial expression of pain

Existing research has reported progress in automated expression recognition. The work has largely focused on general expressions involving the six basic emotions [7,8]. These methods usually employ 2D spatiotemporal features in the recognition task: geometric features that describe the shapes of the facial components (eyes, mouth, etc.) and appearance features that describe textural image information such as wrinkles, bulges, and furrows. There are many publicly accessible general facial expression databases, as tabulated by Zeng et al. [7], and these contain both spontaneous and posed expressions [6,15,16]. General expression recognition methods rely on classifiers trained with large sets of images [17,18]. Pantic et al. [13] proposed a rule-based approach for AU detection using the temporal dynamics of feature points; they adopt particle filtering to track 15 feature points in the profile view of human faces. Tian et al. [6] combined facial component shapes with transient features such as crow's-feet wrinkles and nasolabial furrows for AU detection using neural networks. The data used in the recognition of the six basic emotions often consist of deliberate and exaggerated facial displays of emotion. Research in psychology [9,10] has suggested that posed facial expressions differ from spontaneous ones in muscle utilization and dynamics. Therefore, automated facial analysis methods trained on deliberately posed images may perform poorly when applied to spontaneous expression recognition. Several recent studies [7] on automated recognition have investigated spontaneous facial expression data. Tong and Ji [11] use a Dynamic Bayesian Network (DBN) to model the relationships among different AUs, which enables a probability-based measure of the presence of AUs. Their method is robust to changing illumination, pose alteration, and occlusion, and is potentially useful for detecting spontaneous facial expressions, although further study is needed to develop a more complete network that captures the temporal relationships among AUs. However, the databases used in these studies and those listed by Zeng et al. [7] do not contain videos of spontaneous pain expression. Only recently has some effort been directed at detecting pain-related facial expressions [14,17,19]. Lucey et al. [12] used an Active Appearance Model (AAM) based method to extract features from video archives of patients with shoulder pain. A Support Vector Machine (SVM) is trained on the similarity-normalized shape (S-PTS) features, the canonical-normalized appearance (C-APP) features, and their combination, and is used for Action Unit classification; the authors observed that the best AU detection performance is obtained by fusing these features together. In 2011, their database, the UNBC-McMaster Shoulder Pain Expression Archive [14], containing facial expressions of shoulder pain induced by movement, became publicly available.

Two key tasks are required in automated AU detection: feature extraction and feature classification/recognition. The feature extraction task is conveniently performed using the Active Appearance Model (AAM), a parametric model of shape and texture that provides an efficient approach to aligning a predefined template to an unseen image containing objects of interest. The AAM has been used in recent research [3,4] for facial expression modeling and video tracking. The AU recognition task is generally solved with a classification method, such as an SVM or a neural network. Classification-based methods aim for a high recognition rate over a large number of AUs and rely on a well-established database with adequate AU diversity. In contrast, we use a database recorded in a natural setting (patient homes) in which pain expressions occur infrequently: patients display a neutral expression in most video frames, and spontaneous pain occurs only in short periods, often accompanied by body movement or rigid head motion. Insufficient positive examples (frames containing AUs of interest) make it difficult to train a classifier. On the other hand, the set of AUs and AU combinations associated with pain is limited [3]; hence recognition accuracy for the AUs of interest has higher priority than the total number of AUs that can be recognized. A rule-based recognition method is therefore preferable in this situation.

1.2. Database of cancer patient videos used in this study

In this study we use the database created by D. Wilkie [3], containing videos of 43 patients suffering from lung cancer. The patients were asked to perform a standard set of randomly ordered behaviors, including sitting, standing up, walking, and reclining, during a 10-minute video recording, with the camera focused on the face to capture their facial expressions. Each video was divided into 30 segments of 20 seconds each. The segments were reviewed and scored independently by three trained human FACS coders, and an AU was scored in a video segment only if at least two coders agreed with each other. The definitions of the pain-related Action Units scored in this database appear in Table 1.

Table 1.

Pain-Related Action Units (Source: [2])

FACS Action Unit | Description | Muscular Basis
4 | brow lowerer | depressor glabellae, depressor supercilii, corrugator supercilii
6 | cheek raiser | orbicularis oculi, pars orbitalis
7 | lid tightener | orbicularis oculi, pars palpebralis
9 | nose wrinkler | levator labii superioris alaeque nasi
10 | upper lip raiser | levator labii superioris, caput infraorbitalis
20 | lip stretcher | risorius
26 | jaw drop | masseter; temporalis and internal pterygoid relaxed
27 | mouth stretch | pterygoids, digastric
43 | eyes closed | relaxation of levator palpebrae superioris

2. AUTOMATED FEATURE TRACKING

The proposed automated Action Unit recognition method consists of three major steps: (i) person-specific AAM training, (ii) video tracking and feature extraction, and (iii) Action Unit recognition and report generation. A schematic of this framework is shown in Figure 1.

Figure 1. Automated AU Recognition Method. Copyright 2012 by authors, reproduced by permission.

2.1. Training image labeling

The shape S of the AAM is described by a 2D triangular mesh whose vertices are assigned along feature cues on the face. However, consistently marking 66 vertices on the feature profiles by hand is a tedious process, especially on low-resolution video frames that lack sharp detail on feature boundaries. Mismatches among the locations of corresponding vertices in different images introduce unnecessary noise into the eigenshape space. In our work, some key vertices are first identified on the face image, and the positions of the remaining vertices are obtained from their relationship to the key vertices. The feature point configuration is shown in Figure 2. A more convenient way of labeling is to use a semi-automated algorithm that requires identifying only a few feature points on the closed contours detected in the facial images; the remaining vertices are then aligned automatically along the selected contour (a sketch of this step follows Figure 2).

Figure 2. Feature Points Allocation. Copyright 2012 by authors, reproduced by permission.
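To make this semi-automated labeling step concrete, the following is a minimal sketch of how intermediate mesh vertices could be spread evenly, by arc length, along a detected feature contour between hand-picked key points. The function name, vertex count, and example coordinates are illustrative assumptions and not part of the original implementation.

```python
import numpy as np

def resample_contour(contour, n_points):
    """Place n_points evenly (by arc length) along a detected feature contour.

    contour : (M, 2) array of (x, y) points tracing a feature boundary,
              e.g. an eyelid or lip contour returned by a contour detector.
    Returns an (n_points, 2) array of vertex locations for the shape model.
    """
    contour = np.asarray(contour, dtype=float)
    # Cumulative arc length along the polyline.
    seg_len = np.linalg.norm(np.diff(contour, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg_len)])
    # Evenly spaced target positions along the total arc length.
    targets = np.linspace(0.0, arc[-1], n_points)
    # Interpolate x and y as functions of arc length.
    x = np.interp(targets, arc, contour[:, 0])
    y = np.interp(targets, arc, contour[:, 1])
    return np.stack([x, y], axis=1)

# Example: spread 9 mesh vertices between two hand-picked key points
# along an eyelid contour (coordinates are made up for illustration).
eyelid_contour = np.array([[120, 88], [130, 84], [142, 82], [155, 83], [166, 87]])
vertices = resample_contour(eyelid_contour, 9)
```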

2.2. Partitioned-coefficient Active Appearance Model

The Active Appearance Model (AAM), which is a parametric model of shape and texture, serves as the basis of an efficient method to align a predefined template to an unseen image containing objects of interest. The shape model is spanned by n eigenshapes $S_i$ plus the mean shape $S_0$:

$S = S_0 + \sum_{i=1}^{n} p_i S_i$    (1)

The appearance model is spanned by m eigen-appearances $A_j$ plus the mean texture $A_0$:

$A = A_0 + \sum_{j=1}^{m} \lambda_j A_j$    (2)

The main goal of AAM fitting is to find the parameters that minimize the discrepancy between the observed image and the synthesized image. We follow the project-out inverse compositional fitting algorithm in [5] for the AAM implementation and train a person-specific AAM for each patient. The project-out method makes the iteration update orthogonal to the eigen-texture space, so that only the shape parameters need to be updated during the fitting process. In the original model, each coefficient controls a specific motion (represented by one eigenshape) of all vertices simultaneously in the shape model. The global coefficients $\{p_i\}$ may not adequately represent different local motions at the same time. A tradeoff between local shape fitting and global error reduction occurs when the algorithm tries to fit multiple feature components on the face, e.g., eyes, mouth, and brows. Such a tradeoff sometimes causes the algorithm to get stuck in a local minimum. In order to increase the flexibility of the deformation of the shape model, we use a Partitioned-Coefficient AAM in which the shape model is decoupled into five dominant features, namely jaw line, brows, eyes, nose, and mouth, as represented by Equation (3),

$S_p = S_0 + \sum_{i=1}^{5} \sum_{j=1}^{m_i} p_{ij} S_{ij}$    (3)

where $S_{ij}$ is the j-th eigenvector of the i-th feature and $p_{ij}$ is the coefficient corresponding to $S_{ij}$. Eigenvectors belonging to the same feature are orthogonal as a consequence of Principal Component Analysis (PCA), and eigenvectors corresponding to different features are also orthogonal, since they have non-zero entries only on disjoint sets of vertices. The shape model is thus decoupled, and the deformation of each feature in the shape profile is controlled by its own coefficients. It is easy to show that the original fitting algorithm is still applicable to the sparse-matrix form of the shape model, at the cost of a slight increase in computational complexity. However, one potential problem arises in the partitioned-coefficient model: as more flexibility is given to individual features, the feature profiles may intersect with each other during the fitting process, which violates the Delaunay triangulation used in the AAM training process. A boundary-check mechanism is therefore introduced to avoid this problem. The details of the partitioned-coefficient method are described in [20].
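A minimal sketch of the partitioned-coefficient shape synthesis in Equation (3) follows, assuming each feature's eigenshapes are stored with non-zero entries only on that feature's own vertices; the feature names, mode counts, and random data below are illustrative, and the iterative fitting (coefficient update) step is omitted.

```python
import numpy as np

def synthesize_shape(S0, eigenshapes, coeffs):
    """Partitioned-coefficient shape model (Eq. 3): S_p = S_0 + sum_i sum_j p_ij S_ij.

    S0          : (2V,) mean shape (x and y coordinates of the V vertices stacked).
    eigenshapes : dict feature -> (m_i, 2V) array whose rows are the eigenshapes
                  S_ij of that feature; in practice each row is zero outside the
                  vertices of its own feature, keeping the partitions decoupled.
    coeffs      : dict feature -> (m_i,) coefficient vector p_ij.
    """
    S = S0.copy()
    for feature, basis in eigenshapes.items():
        S += coeffs[feature] @ basis  # add only this feature's deformation
    return S

# Toy usage with V = 66 vertices and 3 modes per feature (illustrative numbers;
# the random bases here are not block-sparse as they would be in practice).
V, modes = 66, 3
rng = np.random.default_rng(0)
S0 = rng.standard_normal(2 * V)
features = ["jaw", "brows", "eyes", "nose", "mouth"]
eigenshapes = {f: rng.standard_normal((modes, 2 * V)) for f in features}
coeffs = {f: np.zeros(modes) for f in features}
coeffs["mouth"][0] = 0.5  # deform only the mouth partition
Sp = synthesize_shape(S0, eigenshapes, coeffs)
```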

3. RULE BASED ACTION UNIT RECOGNITION

The rule-based method relies on feature points extracted from the shape vertices of the AAM to identify facial action cues, which have a natural correspondence to facial muscular movement. The work in [13] further included the temporal information of the feature points, which enhanced robustness in AU recognition, especially for detecting AU combinations in a video stream. However, the recognition rules must be designed carefully. First, rules must be mutually exclusive, which requires avoiding ambiguity between any two rules; due to the limited number of feature points employed, some AUs (e.g., AU 26 and 27) can only be discriminated by a threshold if only shape information is used. Second, permanent features such as wrinkles, as well as individual differences, cause appearance and intensity variations within the same AU, and AU combinations may change the appearance relative to a superposition of single AUs because of their non-additivity. Indeed, the FACS manual gives very detailed descriptions of these subtle changes. Therefore, further study is needed to develop rules that properly synthesize and interpret the AU information as defined by FACS.

The rules developed here are based on measurements of the Euclidean distances among feature points. Ideally, the feature points should be selected to be the best candidates for representing the muscular movements on the face listed in Table 1. In practice, however, feature points are often selected on boundaries where significant texture change occurs, e.g., the eyelids and the mouth profile, which helps in labeling the training images consistently. Whereas most existing research has focused on expanding the collection of AUs to be recognized, our goal of detecting spontaneous pain in patients with lung cancer involves only a small set of AUs and their combinations, and our effort is directed at carefully developing rules that mimic human experts' decisions. Among the AUs listed in Table 1, AUs 4, 6, 7, and 43 are displayed on the upper face, while AUs 9, 10, 20, 26, and 27 appear on the lower face. The method reported here has been developed for an upright face; segments in which the patient is lying down are excluded from this study. A sample score sheet is given in Figure 4. We now examine four aspects of the rule-based recognition method: 1) distance parameter extraction, 2) rule-based recognition, 3) decision trees for AU combinations, and 4) a performance criterion based on the score sheets marked by experts.

Figure 3. AU Combination Detection. Copyright 2012 by authors, reproduced by permission.

3.1. Distance parameter extraction

The FACS manual [2] classifies the Action Units into four action categories, namely horizontal, up/down (vertical), oblique, and orbital. While horizontal and vertical actions can be calculated directly from the coordinates of feature points, oblique and orbital actions may be extracted by evaluating angular and curvature information among certain feature points. The AUs listed in the score sheet are sufficiently distinguishable by horizontal and vertical actions alone, so we mainly consider horizontal and vertical distance changes between feature points. We adopt two intermediate parameters, following an approach similar to [13], to evaluate the Euclidean distances of the extracted feature points: $\mathrm{inc/dec}(pp') = \|pp'\|_t - \|pp'\|_0$ describes the change in distance (increase/decrease) between points p and p', where the subscript t refers to the t-th frame and 0 to the neutral frame; and $\mathrm{up/down}(p) = \|pp_{\mathrm{ref}}\|_t - \|pp_{\mathrm{ref}}\|_0$ describes the vertical motion of a single feature point p relative to a reference point in the same frame. These parameters are calculated from the AAM shape model for every frame, and the effect of rigid motion is removed by aligning the shapes to the mean shape prior to the calculation. The rules used to identify single and combined pain-related AUs, as presented on the score sheet used by the human FACS coders, are summarized in Table 2; a sketch of these distance primitives follows the table.

Table 2.

Action Unit Decision Rules (Key: / = or; + = and)

Action Unit | Rules Involved (distances in pixels) | Rule Combination Code | Max Score | Min Score
6/7 | (up(G) or dec(FG)) and not(dec(DD1h)) | 110 | 2 | 1
20 | inc(IJh) and not(up/down(I) or up/down(J)) | 100 | 2 | 1
27 | inc(KL) > T1 and inc(IJh) and not(dec(DD1h)) | 110 | 1 | 1
4+6/7/43 | (dec(DD1h) or down(D)) and (up(G) or dec(FG)) | 1111 | 4 | 2
4+9/10 | (dec(DD1h) or down(D)) and (up(N) or up(H)) | 1111 | 4 | 2
4+26 | (dec(DD1h) or down(D)) and inc(KL) < T1 and not(inc(IJh)) | 1111 | 2 | 1
4+27 | (dec(DD1h) or down(D)) and inc(KL) > T1 and inc(IJh) | 1110 | 2 | 1
9/10+26 | (up(N) or up(H)) and inc(KL) < T1 and not(inc(IJh)) | 1110 | 2 | 1
9/10+27 | (up(N) or up(H)) and inc(KL) > T1 and inc(IJh) | 1111 | 2 | 1
None | None of the AU detectors triggered | — | 1 | 1
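To make the distance primitives used in Table 2 concrete, the sketch below computes inc/dec and up/down from two already-aligned AAM shapes and shows how one rule might be assembled from them. The treatment of the 'h' suffix as the horizontal component, the point indices, and the threshold value are assumptions for illustration only.

```python
import numpy as np

def inc_dec(shape_t, shape_0, p, q, component=None):
    """inc/dec(pq) = |pq|_t - |pq|_0: change in distance between feature points
    p and q from the neutral frame (0) to frame t.

    shape_t, shape_0 : (V, 2) vertex coordinates, already aligned to the mean
                       shape so that rigid head motion is removed.
    component        : None for the full Euclidean distance, 0 for the
                       horizontal component only (the 'h' rules in Table 2),
                       1 for the vertical component only.
    """
    def dist(shape):
        d = shape[q] - shape[p]
        return abs(d[component]) if component is not None else np.linalg.norm(d)
    return dist(shape_t) - dist(shape_0)

def up_down(shape_t, shape_0, p, ref):
    """up/down(p): vertical motion of point p relative to a stable reference
    point ref, measured in the same frame and compared with the neutral frame."""
    return abs(shape_t[p, 1] - shape_t[ref, 1]) - abs(shape_0[p, 1] - shape_0[ref, 1])

# Illustrative rule for AU 20: inc(IJh) and not(up/down(I) or up/down(J)),
# with I, J as mouth-corner indices, REF a stable nose point, and T a tolerance.
I, J, REF, T = 48, 54, 33, 1.5  # hypothetical indices and threshold (pixels)

def au20_triggered(shape_t, shape_0):
    return (inc_dec(shape_t, shape_0, I, J, component=0) > T
            and not (abs(up_down(shape_t, shape_0, I, REF)) > T
                     or abs(up_down(shape_t, shape_0, J, REF)) > T))
```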

3.2. Rule-based recognition

A rule for the presence of an AU is triggered if a complete onset-apex-offset period is observed. An Action Unit is produced by the movement of multiple muscle groups, and every such movement can be represented by a change in distance between feature points. Therefore, an AU may be identified by checking whether all of its key rules are triggered, and its intensity may be scored by the number of rules that are triggered. We consider three kinds of relationships between individual rules, namely 'And', 'Or', and 'Exclusive': 'i And j' means rule i and rule j must overlap for at least 50% of the duration of the one with the shorter period; 'i Or j' means either rule i or rule j must hold; 'Exclusive' means rule i and rule j cannot overlap. The length of a rule combination code equals the number of rules involved in an AU decision; the digit '1' indicates that the corresponding rule overlaps with the first rule, whereas '0' indicates the opposite. When several AUs occur simultaneously, the appearance of each AU may differ significantly from its individual appearance, which is referred to as non-additivity. In order to score certain AUs in a non-additive combination, the FACS manual highlights principal rules that must be met. For instance, to score AU 4 together with 6 or 7, the brows must be pulled together, since 6 or 7 alone may also lower the brows. If the principal rules for an AU combination are met, a partial score is generated for the AUs in Table 2; if all the rules are triggered in addition to the principal rules, the full score listed in Table 2 is reached, giving greater confidence in the recognized AU combination.
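The sketch below gives one possible reading of the 'And' overlap requirement and the rule combination codes described above, assuming each rule has already been converted into a per-frame boolean track over a video segment; the scoring in the original implementation may differ in its details.

```python
import numpy as np

def overlaps_enough(a, b):
    """'i And j': the active periods of rules i and j must overlap for at least
    50% of the duration of the shorter one.

    a, b : boolean arrays with one entry per frame, True while the rule holds
           (i.e. during its onset-apex-offset period).
    """
    overlap = np.logical_and(a, b).sum()
    shorter = min(a.sum(), b.sum())
    return shorter > 0 and overlap >= 0.5 * shorter

def score_combination(rule_tracks, combination_code):
    """Score an AU decision from per-rule activation tracks.

    rule_tracks      : list of boolean arrays, the first being the anchor rule.
    combination_code : string such as '110'; a '1' means the corresponding rule
                       must overlap with the first rule, a '0' means it need not.
    Returns the number of satisfied rules, used as a rough confidence/intensity
    score to compare against the Max/Min scores in Table 2.
    """
    anchor = rule_tracks[0]
    score = int(anchor.any())
    for track, flag in zip(rule_tracks[1:], combination_code[1:]):
        if flag == "1":
            score += int(overlaps_enough(anchor, track))
        else:
            score += int(track.any())
    return score
```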

3.3. Decision trees for AU combination

The sequential relationships used in detecting an AU combination are captured by decision trees. Although some AU combinations in the score sheet are mutually exclusive, they can be scored in different periods within the 20-second segment; hence we cannot rely on a single decision tree to identify all possible AUs. On the other hand, some AUs (e.g., AU 6/7 and 9/10) that are caused by different muscle groups yet display similar appearance changes do not need to be discriminated for pain recognition on the score sheet, which simplifies the structure of the decision trees to some extent. Inspection of the score sheet shows that AU 4 and AU 9/10 are involved in most AU combinations, but neither is scored alone in Table 2. Therefore, the decision trees may start from these two nodes whenever there is evidence of their presence in the AU combinations. Five simple decision trees are sufficient to cover all the AUs in Table 2, as shown in Figure 3.

3.4. Performance criterion based on score sheet marked by experts

In our dataset, patients displayed very few pain-related expressions or action units of interest during most of the 20-second video segments, and the experts often did not reach agreement because of the low intensity of AUs or interference from other AUs. It is therefore not appropriate to evaluate the performance of our method using recognition rate (because of the very limited number of positive samples) or false alarm rate (because of the difficulty of identifying ground truth). Instead, we adopt an alternative approach to performance assessment by evaluating the agreement between the score sheet generated by the coding experts and the recognition result of the algorithm. Visual tracking results of the feature points will also be provided as a reference to the coding experts, since they may have overlooked some features while concentrating on others during the frame-by-frame scoring process, which allowed rewinding and viewing the frames as many times as necessary to derive a score. A partial score sheet used by the FACS experts is shown in Figure 4, on which an expert marked the observed action units within every time slot; 13 of the 30 slots are presented in Figure 4.

Figure 4. A sample score sheet.
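A minimal sketch of this agreement-based assessment, comparing the set of AUs scored per slot by the experts with the algorithm's output; the data layout and labels are illustrative and do not reproduce the actual score-sheet format.

```python
def slot_agreement(expert_slots, algorithm_slots):
    """Report the slots on which the expert score sheet and the algorithm agree.

    expert_slots, algorithm_slots : dict mapping (patient, slot) -> set of AU
    labels in score-sheet notation, e.g. {("P18", 18): {"4+9/10"}}.
    Returns the agreeing keys and the fraction of expert-scored slots matched.
    """
    agree = {k for k in expert_slots
             if expert_slots[k] == algorithm_slots.get(k, set())}
    return agree, len(agree) / max(len(expert_slots), 1)

# Hypothetical example in the spirit of Table 3.
experts = {("P18", 18): {"4+9/10"}, ("P27", 1): {"20"}}
computer = {("P18", 18): {"4+9/10"}, ("P27", 1): {"20"}}
agreeing_slots, fraction = slot_agreement(experts, computer)  # full agreement
```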

4. EXPERIMENTAL RESULTS

4.1. Experimental Results

To test the effectiveness of the system, we examined videos of four patients who displayed the Action Units in Table 2. Snapshots of patients P18, P27, and P44 are shown in Figure 5. Figure 6 shows the synthesized facial image of patient P27 generated by the AAM. Figure 7(a) shows the tracking result (patient P27, slot 1) for feature points I and J in both the horizontal and vertical directions, which is used to identify AU 20. Figures 7(b) and (c) show the tracking results (P18, slot 18) for the inner corners of the eyebrows (DD1), nose wrinkling (MH), and upper lip raising (KN), which are used to identify the AU combination 4+9/10. The complete results are summarized in Table 3. In all cases, the AUs recognized by the computer were in full agreement with those scored by at least two coding experts for the upright faces.

Figure 5. Snapshots of patients P44, P27, and P18. Copyright 2012 by authors, reproduced by permission.

Figure 6. AAM modeling result (Patient P27). Copyright 2012 by authors, reproduced by permission.

Figure 7. Tracking results of feature points: (a) AU 20; (b) and (c) AU 4+9/10. Copyright 2012 by authors, reproduced by permission.

Table 3.

AU recognition test on four patients

Patient / Video Slot | Scored AU (Experts) | Recognized AU (Computer)
P11, slot 3 | 4+9/10 | 4+9/10
P18, slot 18 | 4+9/10 | 4+9/10
P27, slot 1 | 20 | 20
P44, slot 2 | 4+6/7/43 | 4+6/7

4.2. Future work on comparison and performance assessment

As noted before, we found full agreement in AU detection between the automated method and the scores of at least two coding experts for the upright faces. The only method and related database that serve as a meaningful reference for pain detection are those of Lucey et al. [4,12,14]. However, their method cannot be applied directly to our database because (i) their baseline AAM/SVM system is person-specific and requires labeling all frames with specific AUs for training, which is not available in our database and is infeasible for its long videos (10 min each), and (ii) shoulder pain is usually acute and triggered by shoulder movement, which gives the coding expert a clear indication of the onset of a pain expression, whereas in our database the pain is subtle and highly infrequent, making it difficult to predict and to unambiguously label each frame. We will evaluate our method on the shoulder pain archive in future work. The AAM labeling is known for their dataset, and we can use it as input to our AAM fitting algorithm. Since our tracking and recognition steps are decoupled, we will use the provided AAM labeling as a reference input to test our rule-based method. We will compare our rule-based method with the S-PTS model used by Lucey et al. [14], and the rule-based method combined with appearance detectors will be compared with their S-PTS + C-APP method [14]. ROC curves will also be employed to evaluate performance.

5. CONCLUSION AND DISCUSSION

Automated pain analysis has great potential to benefit the care of patients unable to communicate their pain to clinicians or lay caregivers. Combining AAM with rule-based recognition has been shown to be a practical solution to this problem, producing a score report comparable to that generated by human experts. However, our home-based patient database was created with analog video technology in the late 1980s, and varying illumination, rigid motion, and poor video quality are the major challenges that prevent us from performing a complete evaluation of the entire database. In future work, a new database will be created with high-resolution videos and optimized camera configurations to further facilitate feature extraction. In addition, the shape model in the AAM will be extended to 3D to capture the out-of-plane motion of feature points, providing more reliable detection of Action Units.

ACKNOWLEDGEMENT

This research and publication was made possible by Grant Number P30 NR010680-S1 from the National Institutes of Health, National Institute of Nursing Research. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the National Institute of Nursing Research.

REFERENCES

  • [1] Craig K, Prkachin K, and Grunau R, "The facial expression of pain," in [Handbook of Pain Assessment], New York: Guilford (2001).
  • [2] Ekman P, Friesen W, and Hager J, [Facial Action Coding System: The Manual], Research Nexus division of Network Research Information, Salt Lake City, UT, USA (2002).
  • [3] Wilkie DJ, "Facial Expressions of Pain in Lung Cancer," Analgesia 1(2), 91–99 (1995).
  • [4] Lucey P, et al., "Automatically Detecting Action Units from Faces of Pain: Comparing Shape and Appearance Features," IEEE CVPR Workshops (2009).
  • [5] Matthews I and Baker S, "Active appearance models revisited," International Journal of Computer Vision 60(2), 135–164 (2004).
  • [6] Tian Y, Cohn JF, and Kanade T, "Facial expression analysis," in Li SZ and Jain AK, eds., [Handbook of Face Recognition], New York: Springer, 247–276 (2005).
  • [7] Zeng Z, et al., "A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions," IEEE Trans. on Pattern Analysis and Machine Intelligence 31(1), 39–58 (2009).
  • [8] Fasel B and Luettin J, "Automatic Facial Expression Analysis: A Survey," Pattern Recognition 36(1), 259–275 (2003).
  • [9] Cohn JF, "Foundations of Human Computing: Facial Expression and Emotion," Proc. ICMI '06, 233–238 (2006).
  • [10] Pantic M and Bartlett MS, "Machine Analysis of Facial Expressions," in Delac K and Grgic M, eds., [Face Recognition], ISBN 978-3-902613-03-5 (2007).
  • [11] Tong Y, Liao W, and Ji Q, "Facial Action Unit Recognition by Exploiting Their Dynamic and Semantic Relationships," IEEE Trans. on Pattern Analysis and Machine Intelligence 29(10), 1683–1699 (2007).
  • [12] Lucey P, et al., "Automatically Detecting Pain in Video Through Facial Action Units," IEEE Trans. on Systems, Man, and Cybernetics 41(3), 664–674 (2011).
  • [13] Pantic M and Patras I, "Dynamics of Facial Expression: Recognition of Facial Actions and Their Temporal Segments From Face Profile Image Sequences," IEEE Trans. on Systems, Man, and Cybernetics 36(2), 433–449 (2006).
  • [14] Lucey P, Cohn JF, Prkachin KM, Solomon P, and Matthews I, "Painful Data: The UNBC-McMaster Shoulder Pain Expression Archive Database," IEEE Int. Conf. on Automatic Face and Gesture Recognition (FG 2011), 57–64 (2011).
  • [15] Pantic M, Valstar MF, Rademaker R, and Maat L, "Web-Based Database for Facial Expression Analysis," Proc. Multimedia '05, 317–321 (2005).
  • [16] Kanade T, Cohn J, and Tian Y, "Comprehensive Database for Facial Expression Analysis," Proc. IEEE Int. Conf. on Automatic Face and Gesture Recognition (AFGR '00), 46–53 (2000).
  • [17] Lucey P, Cohn J, Lucey S, Sridharan S, and Prkachin KM, "Automatically detecting action units from faces of pain: Comparing shape and appearance features," IEEE CVPR Workshops, 12–8 (2009).
  • [18] Littlewort GC, Bartlett MS, and Lee K, "Faces of Pain: Automated Measurement of Spontaneous Facial Expressions of Genuine and Posed Pain," Proc. Ninth ACM Int. Conf. on Multimodal Interfaces (ICMI '07), 15–21 (2007).
  • [19] Ashraf AB, Lucey S, Cohn JF, et al., "The painful face: Pain expression recognition using active appearance models," Proc. ACM Int. Conf. on Multimodal Interfaces (ICMI '07), 9–14 (2007).
  • [20] Chen Z, Ansari R, and Wilkie DJ, "Partitioned-coefficient AAM for detection of pain from facial expression using FACS," to be submitted.
