Indian Journal of Psychological Medicine. 2022 Aug 5;45(5):471–475. doi: 10.1177/02537176221111578

Development and Validation of the AIIMS Facial Toolbox for Emotion Recognition

Rohit Verma 1, Navkiran Kalsi 1, Neha Priya Shrivastava 1, Anita Sheerha 1
PMCID: PMC10523516  PMID: 37772150

Abstract

Background:

An emotional facial expression database, used in emotion regulation studies, is a specialized set of pictures with high social and biological relevance. We present the AIIMS Facial Toolbox for Emotion Recognition (AFTER) database. It consists of pictures of 15 adult professional artists displaying seven facial expressions—neutral, happiness, anger, sadness, disgust, fear, and surprise.

Methods:

This cross-sectional study enrolled 15 volunteer students from a professional drama college in India (six males and nine females; mean age = 26.2 ± 1.93 years). They were instructed to pose with different emotional expressions at high and low intensity. A total of 240 pictures were captured in a brightly lit room against a common, light background. Each picture was independently validated by 19 mental health professionals and two professional teachers of dramatic art. Apart from recognition of the emotional quality, each emotion was rated on a 5-point Likert scale on three dimensions—intensity, clarity, and genuineness. Results are discussed in terms of mean scores on all four parameters.

Results:

The percentage hit rate for all the emotions, after exclusion of contempt, was 84.3%, with the mean kappa for emotional expression being 0.68. Mean scores on intensity, clarity, and genuineness of the emotions depicted in the pictures were high.

Conclusions:

The database would be useful in the Indian context for researching facial emotion recognition. It has been validated among a group of experts and was found to have high inter-rater reliability.

Keywords: Emotion, facial expression, emotion recognition, valence, communication


Key Message:

Processing faces and facial expressions is crucial for all forms of social communication, and the interpretation of emotion is culturally dependent. Most existing databases are based on Caucasian, East Asian (Chinese, Japanese, Korean), or African-American faces. Few databases contain Indian faces. The AIIMS Facial Toolbox for Emotion Recognition (AFTER) database would be useful in the Indian context for researching facial emotion recognition.

A wealth of interpersonally relevant information is gathered by observing faces and their expressions. Recognition of affective states, particularly the basic emotions, is a prerequisite for intact social behaviour.1 Deficits in processing such information can make appropriate reactions impossible.2,3 Images of faces and facial expressions are commonly used as stimulus materials in diverse research fields.

Facial expressions have been called the universal language of emotion.4 On this view, all humans communicate six basic internal emotional states (happiness, anger, sadness, surprise, fear, and disgust) using similar facial movements, by virtue of their biological and evolutionary origins. In contrast, many recent researchers have opposed this notion, suggesting that the perception and interpretation of emotion are culture-dependent.5–8 While classic studies demonstrated that emotion recognition was above chance even for individuals from disparate cultures,9,10 they also noted that recognition was more accurate when the emotions were both expressed and perceived by members of the same culture.11 The facial stimuli in existing databases vary substantially in facial feature characteristics and expression of emotions, depending on the culture in which each database was built. For example, the Radboud Faces Database (RaFD), the FACES database, and the Karolinska Directed Emotional Faces (KDEF) database contain only Caucasian models,12–14 while the Racially Diverse Affective Expression (RADIATE) face stimulus set was developed in the United States and the Tsinghua Facial Expression Database contains only Chinese faces.15,16

A few databases containing Indian faces have been developed, making an invaluable contribution to research on emotion processing.17–20 However, the parameters of existing facial picture sets may not always satisfy the objectives of an experiment. For example, a database may include only a few actors,17 depict less intense emotions because it was developed for computer-vision algorithms,18 or be dated, with pictures available in black and white only.19 These inadequacies leave a gap in the existing databases, generating the need for a more standardized toolbox that the brain research community in the Indian subcontinent can use. In this report, we present the development of a validated database for the recognition of emotions, containing static images of Indian faces.

Materials and Methods

Development of the Database

The data presented here on the development of static facial images are part of a larger study to develop an entire toolbox for various facets of emotion recognition. This cross-sectional study was conducted from March 2019 to August 2021 at the Department of Psychiatry, All India Institute of Medical Sciences, New Delhi, India. The Institute Ethics Committee approved the study. The current database contains front-gazing portrait images of 15 participants. The recruited participants were models in the final year of their degree at a professional drama college in India, the National School of Drama (NSD) (six males and nine females; mean ± SD age = 26.2 ± 1.93 years).

Based on the previous literature, eight facial expressions were selected—neutral, happiness, anger, sadness, disgust, contempt, fear, and surprise.21 The models expressed these emotions at high and low intensity, with the eyes directed straight ahead in each picture. This yielded 120 low-intensity and 120 high-intensity raw pictures (15 actors × 8 expressions × 2 intensities).

Before the photo shoot, the models were given detailed instructions about the targeted emotions and practiced all the emotional expressions. During the photo shoot, each model took approximately 45 minutes to pose for all the expressions, taking intermittent breaks to transition from one emotion to another. Each model posed for the eight facial expressions, including the neutral expression, at high and low intensity. The photo shoot took place at NSD, against a light background in a brightly lit room. After each photograph, a psychiatrist and a psychologist discussed the expression and proceeded further only after reaching a consensus that the concerned emotion was validly displayed.

Apparatus: Pictures were captured with a digital single-lens reflex camera (Nikon D7000, 16.2-megapixel resolution) fitted with a fixed lens (Nikon 35 mm f/1.8G, a DX-only lens), framed so that the face filled the frame.

Image processing: All pictures were initially stored in RAW format, then converted to TIFF format and corrected for white balance using the free software packages UFRaw and GIMP. Next, all pictures were spatially aligned according to facial landmarks, cropped, and resized from 4928 × 3264 pixels to 1024 × 768 pixels.
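For readers wishing to reproduce the final crop-and-resize step, a minimal Python sketch is given below. It assumes the Pillow imaging library and hypothetical file names; note that the published pipeline aligned faces on facial landmarks, whereas this sketch simply crops to the target 4:3 aspect ratio around the frame centre.

```python
# Minimal sketch of the crop-and-resize step, assuming Pillow.
# File names are hypothetical; the actual pipeline aligned faces
# on facial landmarks before cropping.
from PIL import Image

SRC = "model01_happiness_high.tiff"           # hypothetical input
DST = "model01_happiness_high_1024x768.tiff"  # hypothetical output

img = Image.open(SRC)                         # 4928 x 3264 source
w, h = img.size
target_ratio = 1024 / 768                     # 4:3 target aspect ratio

# Centre-crop to 4:3 before resizing so the face is not distorted.
crop_w = min(w, int(h * target_ratio))
crop_h = min(h, int(w / target_ratio))
left, top = (w - crop_w) // 2, (h - crop_h) // 2
img = img.crop((left, top, left + crop_w, top + crop_h))

img.resize((1024, 768), Image.LANCZOS).save(DST)
```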

Validation of the Database

Participants

The database was validated by two faculty members of NSD and 19 qualified mental health professionals from the Department of Psychiatry of a tertiary care hospital (11 males and 10 females; mean age = 28.6 ± 12.1 years). All raters had normal or corrected-to-normal vision and volunteered to participate without compensation.

Procedure

Raters were presented with randomized images of the facial expressions on a 14-inch computer screen. They were seated 50–70 cm from the screen and proceeded at their own pace in the presence of a researcher (NPS or NK). Raters were asked to select the expression that best fit the emotion depicted in the image, choosing one of nine response categories: happiness, anger, sadness, neutral, surprise, fear, contempt, disgust, and other.

Subsequently, the raters judged the dimensional aspects of each image on 5-point Likert ratings for (a) the intensity of the expression (weak to strong), (b) the clarity of the expression (unclear to clear), and (c) the genuineness of the expression (fake to genuine).
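As an illustration of how a single rating trial of this kind could be logged, the sketch below appends one row per judgment to a CSV file; the identifiers, field layout, and file name are hypothetical, not the study's actual data format.

```python
# Hypothetical CSV logging of one rating trial: the chosen category
# plus the three 5-point dimensional judgments.
import csv

CATEGORIES = ["happiness", "anger", "sadness", "neutral", "surprise",
              "fear", "contempt", "disgust", "other"]

choice = "happiness"
assert choice in CATEGORIES          # guard against mistyped labels

with open("ratings.csv", "a", newline="") as f:
    csv.writer(f).writerow([
        "rater07",                   # rater identifier (hypothetical)
        "model01_happiness_high",    # image identifier (hypothetical)
        choice,                      # chosen response category
        4,                           # intensity (1 = weak ... 5 = strong)
        5,                           # clarity (1 = unclear ... 5 = clear)
        4,                           # genuineness (1 = fake ... 5 = genuine)
    ])
```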

Analysis

Facial expression recognition was evaluated by the percentage hit rate with standard deviation (SD), that is, the proportion of raters who agreed with the intended expression, calculated for each emotion.19,21–23 Hit rates were first calculated for each image; the 11 images of each emotion with the highest hit rates were then selected, and the mean hit rate for that emotion was calculated from this image set.
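A minimal sketch of this computation, assuming pandas and a long-format table with one row per rater-image judgment (the column and file names below are hypothetical):

```python
# Hit-rate sketch, assuming pandas and hypothetical columns:
# rater_id, image_id, intended, chosen.
import pandas as pd

ratings = pd.read_csv("ratings_long.csv")        # hypothetical file

# Hit rate per image: percentage of raters whose label matched the
# intended expression.
hit = (ratings.assign(hit=ratings["chosen"] == ratings["intended"])
              .groupby(["intended", "image_id"])["hit"]
              .mean() * 100)

# Keep the 11 best-recognized images per emotion, then report the
# mean and SD of their hit rates, as summarized in Table 1.
top11 = hit.sort_values(ascending=False).groupby(level="intended").head(11)
print(top11.groupby(level="intended").agg(["mean", "std"]))
```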

In addition, we calculated Fleiss’ kappa, a chance-corrected measure of agreement between the intended expression and the raters’ labels. For each image, we also calculated the mean judgments on the dimensions of clarity, intensity, and genuineness.
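A sketch of this agreement analysis using the Fleiss’ kappa implementation in statsmodels is shown below; the label matrix here is randomly generated for illustration (rows as images, columns as the 21 raters), not the study's ratings.

```python
# Fleiss' kappa sketch using statsmodels; the label matrix is random
# illustration data, not the study's actual ratings.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
# 120 images x 21 raters, labels coded 0..8
# (happiness .. other, matching the nine response categories).
labels = rng.integers(0, 9, size=(120, 21))

# aggregate_raters turns per-rater labels into per-image category
# counts, the input format fleiss_kappa expects.
counts, _ = aggregate_raters(labels)
print(f"Fleiss' kappa = {fleiss_kappa(counts):.2f}")
```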

Results

All individuals in the data set portrayed seven emotional expressions along with a neutral expression. Validation began with two experts (faculty from NSD), who individually assessed the high- and low-intensity images. Expert consensus for recognition of the low-intensity emotions was very low; hence, the low-intensity pictures were excluded from further assessment. Finally, a total of 120 facial images were assessed by 21 raters on four measures: expression, intensity, clarity, and genuineness.

Emotion Recognition

For each image, we calculated how many raters chose the intended emotion label. Accuracy rates differed across models portraying a given emotion. We therefore calculated hit rates for each image and selected the 11 images with the highest hit rates for each emotion for inclusion in the database—the AIIMS Facial Toolbox for Emotion Recognition (AFTER). The hit rate of each selected image for each emotion is provided in Table S1. The overall mean hit rate of all emotions across the database was 75.5% (SD = 24.00). The percentage hit rates for each emotion are shown in Table 1. The hit rate for contempt was very low, at 20.3% (SD = 13.94); hence, contempt was removed from further analysis. The mean hit rate of all emotions after the exclusion of contempt was 84.3% (SD = 8.67). Furthermore, the mean kappa for emotional expression was 0.68. Happiness, anger, surprise, neutral, and fear expressions had kappa scores ranging between 0.70 and 0.89, whereas the kappa scores for disgust and sadness were relatively lower (Table 2).

Table 1.

Hit Rate for Emotion Recognition and Dimensional Scores

Emotion  Average Hit Rate % (SD)  Intensity (Mean ± SD)  Clarity (Mean ± SD)  Genuineness (Mean ± SD)
Anger 85.3 (8.64) 3.80 ± 0.52 3.77 ± 0.47 3.63 ± 0.33
Contempt 20.3 (13.94) 2.96 ± 0.34 3.00 ± 0.32 3.34 ± 0.27
Disgust 76.6 (15.42) 4.03 ± 0.52 3.87 ± 0.49 3.80 ± 0.32
Happiness 97.8 (2.49) 4.27 ± 0.42 4.44 ± 0.28 4.30 ± 0.38
Neutral 84.4 (11.87) 3.34 ± 0.15 3.57 ± 0.27 3.72 ± 0.15
Sadness 74.5 (19.64) 3.37 ± 0.47 3.39 ± 0.39 3.43 ± 0.43
Surprise 93.1 (8.61) 4.08 ± 0.29 3.96 ± 0.36 3.86 ± 0.32
Fear 78.4 (9.13) 4.14 ± 0.25 3.96 ± 0.27 4.02 ± 0.25
Total  –  3.75 ± 0.58  3.75 ± 0.53  3.76 ± 0.42

SD: standard deviation.

Table 2.

Inter-rater Agreement for Individual Emotion Recognition

Rating Category  Conditional Probability  Kappa  Asymptotic Standard Error  Z  P Value  95% CI Lower Bound  95% CI Upper Bound
Happiness 0.91 0.89 0.01 113.09 <0.01 0.87 0.91
Surprise 0.77 0.74 0.01  93.91 <0.01 0.72 0.75
Neutral 0.76 0.73 0.01  92.71 <0.01 0.71 0.74
Disgust 0.65 0.61 0.01  77.67 <0.01 0.59 0.62
Sadness 0.66 0.62 0.01  79.13 <0.01 0.61 0.63
Anger 0.75 0.72 0.01  91.51 <0.01 0.70 0.73
Fear 0.73 0.70 0.01  88.69 <0.01 0.68 0.71
Other 0.06 0.02 0.01   2.43  0.01 0.01 0.03

Discussion

This paper presents a new database of Indian faces with seven facial expressions (happiness, anger, sadness, disgust, surprise, fear, and neutral). The pictures were validated for expression recognition and rated over three dimensions: intensity, clarity, and genuineness. This enables an assessment and standardization of the quality of the data set.

We calculated the inter-rater consensus for absolute emotion recognition, indexed through the hit rate and Fleiss’ kappa coefficient. The mean hit rate for overall emotion recognition was 84.3%, which is comparable to, or even higher than, that of other international databases: the Pictures of Facial Affect, with a mean accuracy of 88%;24 FACES, with mean hit rates ranging from 67% to 96% across emotions;12 RaFD, with an average correct response of 82%;13 and the KDEF database, with a mean hit rate of 71.87%.18,25

In the context of other Indian emotional face databases, to the best of our knowledge, only two groups have conducted a validation study. One database, the Tool for Recognition of Emotions in Neuropsychiatric Disorders (TRENDS), reported a higher hit rate (80% to 100%), with an inter-rater agreement of 60% and an internal consistency (Cronbach’s alpha) of 0.669.17,26 However, its pictures were evaluated in a “forced choice” design, where raters had to select one of the emotions from a predetermined list. The relative merits of such designs have been hotly debated. Researchers have suggested that they prime participants to interpret stimuli as expressions of emotion and inflate agreement by constraining choices.27 It has also been observed that forced choice can produce consensus on clearly incorrect categories when relevant options are missing from the list.28–30 In our study, we mitigated this bias by adding one more category, “other,” under which raters could freely label the expression. The authors of TRENDS reported the reliability of the database but not the inter-rater variability, and they used only four models, compared to 15 in the current database. In the current study, the percentage of correct responses was recorded for each individual image by every rater, reducing the possibility of falsely high consensus amongst raters for a given image. It is possible that for a few faces (see the supplementary table), the depicted sad and disgust emotions are difficult to ascertain precisely, leading to somewhat lower kappa values for these emotions compared to the others. Choosing images with lower hit rates for the “other” category would depict the intended emotion more accurately in future research.

Another widely used Indian database, by Mandal,19 has five photographs each for six emotions (anger, disgust, happiness, sadness, surprise, and fear), for which 70% of the 100 raters were unanimous. However, the pictures were rated on a 7-point scale evaluating the intensity rather than the recognition of emotion, with neutral or no emotion at one end of the scale and only the intended emotion at the other.

One other database is the Indian Spontaneous Expression Database for Emotion Recognition.18 Its expression annotations and intensities were decided by averaging the ratings of four decoders, whose agreement, evaluated using Fleiss’ kappa, was 0.85. However, apart from the models not being experts in expressing emotions, the videos used to elicit the emotions were not standardized. The authors classified only four emotions, using machine learning algorithms, which could have led to a higher propensity for type I error.

Notably, the hit rate for recognition of the contempt expression was substantially low (20.3%); hence, it was excluded from the analysis. This finding accords with previous studies reporting that recognition of contempt is the lowest among the emotions.13,31 The literature holds divergent views on whether contempt is a universal emotion.32–35 The expression and recognition of contempt are highly dependent on culture and context.5 Hence, most facial emotion databases do not include contempt.

Apart from validating the database, we also assessed ratings of each emotion on the dimensions of intensity, clarity, and genuineness. All emotions except contempt had high mean scores on these dimensional constructs. Individuals with deficits in emotion recognition, such as those with schizophrenia, have difficulty recognizing less intense expressions.19 Expressions without clear cues require more attention to decode, whereas extreme expressions have the advantage of amalgamating several facial cues, which helps structure the visual field even under reduced attention. These factors probably have a direct bearing on performance in recognizing extreme expressions. The current data set can thus be regarded as composed of intense, clear, and genuine depictions of emotion, supporting its use in future research on emotion recognition.

Based on the hit rates and the good inter-rater reliability, it might be concluded that the AFTER database offers a valid set of affective stimuli for recognizing emotions. These static pictures can be used freely for emotion research. Researchers can select pictures as a function of parameters like the quality of the emotional expression, hit rate, intensity, clarity, and genuineness.

The current database is limited by not including faces of different age groups; however, most databases worldwide have used models of adult age only. The literature suggests that it is more difficult to identify an emotion expressed by an older face than by a younger one.36 We did not analyze the database for gender-based differences in the images, which may be attempted in a further post hoc analysis. The targeted emotional expressions were based on expert consensus rather than on standard prototypes such as those defined by the Facial Action Coding System.37 The low-intensity images were not included in the database, as the expert consensus for recognizing low-intensity emotions was very low. Dynamic stimuli can cover the whole range of emotions and more closely mimic real-life situations; reading and listening are other facets of emotion recognition that reflect real-life scenarios. The current study reports a static image database; the final overall emotion toolbox will comprise these other facets of emotion recognition. Also, the validation was performed by a small number of qualified mental health professionals, who are trained to perceive emotions better than the general population; further validation in different subpopulations is required before the database can be extended to the larger general population. Future studies may develop databases with differential intensities of emotion expression to understand the impact of varying intensity on emotion recognition.

The clinical utility of the AFTER database could lie in ascertaining the emotion recognition capability of individuals with various neuropsychiatric conditions, assessing change in emotion perception as patients move from an acute to a remitted state, or predicting the likelihood of relapse. Being computerized, it may also find use in developing emotion-recognition tasks for brain imaging studies with techniques such as functional magnetic resonance imaging, functional near-infrared spectroscopy, quantitative electroencephalography, and eye tracking, with a more culturally relevant stance.

Conclusions

The AFTER database would be useful in the Indian context for research on emotion recognition. Such a culturally sensitive database may help capture the perception of emotion from an ethnic perspective. AFTER has been validated in a cohort of experts and was found to have good inter-rater reliability. The database shows promise for use in research settings and needs to be validated in the general population.

Supplemental Material

Supplemental material for this article is available online.

Acknowledgments

The authors thank Ms Neha Behal and Mr Hemant Kumar Upadhayay for providing support in the data collection process.

Footnotes

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The study was sponsored through the Cognitive Science Research Initiative (CSRI) grant of the Department of Science and Technology (DST) (D.O. No. DST/CSRI/2017/186 dated April 12, 2018).

References

1. Wells LJ, Gillespie SM, Rotshtein P. Identification of emotional facial expressions: effects of expression, intensity, and sex on eye gaze. PLoS One 2016; 11(12): e0168307.
2. Baggio HC, Segura B, Ibarretxe-Bilbao N, et al. Structural correlates of facial emotion recognition deficits in Parkinson’s disease patients. Neuropsychologia 2012; 50(8): 2121–2128.
3. Sachs G, Steger-Wuchse D, Kryspin-Exner I, et al. Facial recognition deficits and cognition in schizophrenia. Schizophr Res 2004; 68(1): 27–35.
4. Ekman P. An argument for basic emotions. Cogn Emot 1992; 6(3–4): 169–200.
5. Elfenbein HA, Ambady N. On the universality and cultural specificity of emotion recognition: a meta-analysis. Psychol Bull 2002; 128(2): 203–235.
6. Engelmann JB, Pogosyan M. Emotion perception across cultures: the role of cognitive mechanisms. Front Psychol 2013; 4: 118.
7. Jack RE, Garrod OGB, Yu H, Caldara R, Schyns PG. Facial expressions of emotion are not culturally universal. Proc Natl Acad Sci U S A 2012; 109(19): 7241–7244.
8. Gendron M, Crivelli C, Barrett LF. Universality reconsidered: diversity in making meaning of facial expressions. Curr Dir Psychol Sci 2018; 27(4): 211–219.
9. Ekman P. Universals and cultural differences in facial expressions of emotion. In: Cole J (ed) Nebraska symposium on motivation, vol 19. Lincoln: University of Nebraska Press, 1972, pp. 207–282.
10. Ekman P, Sorenson ER, Friesen WV. Pan-cultural elements in facial displays of emotion. Science 1969; 164: 86–88.
11. Izard CE. The face of emotion. New York: Appleton-Century-Crofts, 1971.
12. Ebner NC, Riediger M, Lindenberger U. FACES—a database of facial expressions in young, middle-aged, and older women and men: development and validation. Behav Res Methods 2010; 42(1): 351–362.
13. Langner O, Dotsch R, Bijlstra G, Wigboldus DH, Hawk ST, van Knippenberg A. Presentation and validation of the Radboud Faces Database. Cogn Emot 2010; 24(8): 1377–1388.
14. Lundqvist D, Flykt A, Öhman A. The Karolinska Directed Emotional Faces. Stockholm: Psychology Section, Department of Clinical Neuroscience, Karolinska Institutet, 1998.
15. Conley MI, Dellarco DV, Rubien-Thomas E, et al. The racially diverse affective expression (RADIATE) face stimulus set. Psychiatry Res 2018; 270: 1059–1067.
16. Yang T, Yang Z, Xu G, et al. Tsinghua facial expression database—a database of facial expressions in Chinese young and older women and men: development and validation. PLoS One 2020; 15: e0231304.
17. Behere RV, Raghunandan VNGP, Venkatasubramanian G, Subbakrishna DK, Jayakumar PN, Gangadhar BN. TRENDS: a tool for recognition of emotions in neuropsychiatric disorders. Indian J Psychol Med 2008; 30(1): 32–38.
18. Happy SL, Patnaik P, Routray A, Guha R. The Indian Spontaneous Expression Database for emotion recognition. IEEE Trans Affect Comput 2017; 8(1): 131–142.
19. Mandal MK. Decoding of facial emotions, in terms of expressiveness, by schizophrenics and depressives. Psychiatry 1987; 50(4): 371–376.
20. Sharma U, Bhushan B. Development and validation of Indian Affective Picture Database. Int J Psychol 2019; 54(4): 462–467.
21. Wang L, Markham R. The development of a series of photographs of Chinese facial expressions of emotion. J Cross Cult Psychol 1999; 30(4): 397–410.
22. Mandal MK, Harizuka S, Bhushan B, Mishra RC. Cultural variation in hemifacial asymmetry of emotion expressions. Br J Soc Psychol 2001; 40(3): 385–398.
23. Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychol Bull 1979; 86(2): 420–428.
24. Ekman P, Friesen WV. Measuring facial movement. Environ Psychol Nonverbal Behav 1976; 1(1): 56–75.
25. Goeleven E, De Raedt R, Leyman L, Verschuere B. The Karolinska Directed Emotional Faces: a validation study. Cogn Emot 2008; 22(6): 1094–1118.
26. Behere RV. Facial emotion recognition deficits: the new face of schizophrenia. Indian J Psychiatry 2015; 57(3): 229–235.
27. Elfenbein HA, Mandal MK, Ambady N, Harizuka S, Kumar S. Cross-cultural patterns in emotion recognition: highlighting design and analytical techniques. Emotion 2002; 2(1): 75–84.
28. Rosenberg E, Ekman P. Conceptual and methodological issues in the judgment of facial expressions of emotion. Motiv Emot 1995; 19: 111–138.
29. Russell JA. Forced-choice response format in the study of facial expression. Motiv Emot 1993; 17: 41–51.
30. Wagner HL. The accessibility of the term “contempt” and the meaning of the unilateral lip curl. Cogn Emot 2000; 14: 689–710.
31. Dores AR, Barbosa F, Queirós C, Carvalho IP, Griffiths MD. Recognizing emotions through facial expressions: a large-scale experimental study. Int J Environ Res Public Health 2020; 17(20): 7420.
32. Izard CE, Haynes OM. On the form and universality of the contempt expression: a challenge to Ekman and Friesen’s claim of discovery. Motiv Emot 1988; 12(1): 1–16.
33. Matsumoto D. American-Japanese cultural differences in the recognition of universal facial expressions. J Cross Cult Psychol 1992; 23(1): 72–84.
34. Ricci Bitti PE, Brighetti G, Garotti PL, Boggi-Cavallo P. Is contempt expressed by pancultural facial movements? In: Forgas JP, Innes JM (eds) Recent advances in social psychology: an international perspective. Amsterdam: Elsevier, 1989, pp. 329–339.
35. Russell JA. Negative results on a reported facial expression of contempt. Motiv Emot 1991; 15: 285–292.
36. Grondhuis SN, Jimmy A, Teague C, Brunet NM. Having difficulties reading the facial expression of older individuals? Blame it on the facial muscles, not the wrinkles. Front Psychol 2021; 12: 620768.
37. Frith C. Role of facial expressions in social interactions. Philos Trans R Soc B Biol Sci 2009; 364(1535): 3453–3458.
