Dement Neuropsychol. 2022 Dec 5;16(4):388–410. doi: 10.1590/1980-5764-dn-2022-0039

Table 2. Methodological characteristics used in the studies to create the databases.

Columns: (1) Authors and year of publication; (2) Name of the database elaborated; (3) Method used to elicit the emotions; (4) Patterns in stimulus capture; (5) Criteria used in the validation stage for inclusion of stimuli in the final database; (6) Sample characteristics in the stage for the validation of the stimuli; (7) Psychometric properties assessed.
Benda and Scherf (2020) 25
  • Database: Complex Emotion Expression Database (CEED)
  • Elicitation method: 1) Presentation of an equivalent photograph expressing the emotion; 2) Emotions elicited from specific situations
  • Background: White

  • Clothes: ND

  • Distractors removed: ND

  • Inclusion criteria: Accuracy ≥50%
  • Validation sample: 796 volunteers recruited through MTurk
  • Age: 34 years; SD=11.6

  • Gender: M=403; F=388

  • Race: ND

  • Analysis of the items: Accuracy*

  • Validity evidence: Content-based: Accuracy and error in each item

Chung et al. (2019) 26
  • Database: Yonsei Face Database (YFace DB)
  • Elicitation method: 1) Presentation of an equivalent photograph expressing the emotion; 2) Instruction on muscle movement of the emotions based on the FACS; 3) Emotions elicited from specific situations
  • Background: White

  • Clothes: Black T-shirt

  • Distractors removed: Beards, glasses, makeup, and bangs

  • Inclusion criteria: Accuracy, intensity, and naturalness
  • Validation sample: 212 students from Seoul University
  • Age: 18-28 years

  • Gender: M=97; F=115

  • Race: ND

  • Analysis of the items: Accuracy

  • Precision: Accuracy

  • Validity evidence: Content-based: Accuracy; Based on the relationship with other variables: ANOVA for difference in precision between genders of the stimuli and evaluators, t-test for difference in mean accuracy between genders and emotions, and post-hoc Bonferroni analysis for items with significant differences‡

Conley et al. (2018) 16
  • Database: The racially diverse affective expression (RADIATE)
  • Elicitation method: Presentation of an equivalent photograph expressing the emotion
  • Background: White

  • Clothes: White sheet

  • Distractors removed: Glasses, headband, hats

  • Inclusion criteria: Accuracy and Cohen's kappa
  • Validation sample: 662 participants recruited through MTurk
  • Age: 18-35 years (27.6 years; SD=3.8)

  • Gender: M=402; F=260

  • Race: Asian (n=48), Black/African-American (n=70), Caucasian (n=470), Hispanic (n=63), and others (n=11)

  • Precision: Reliability (test-retest)

  • Validity evidence: Content-based: Accuracy; Cohen's kappa and variability in precision by race of the model

Dalrymple et al. (2013) 27
  • Database: The Dartmouth Database of Children's Faces
  • Elicitation method: Emotions elicited from specific situations
  • Background: Black

  • Clothes: Black dresses and black hats

  • Distractors removed: Glasses and jewelry

  • Inclusion criteria: Images recognized with ≥70% accuracy
  • Validation sample: 163 students and members of the Dartmouth College academic community
  • Age: 19.6 years; SD=4.15

  • Gender: M=67; F=96

  • Race: ND

  • Precision: Accuracy and Cohen's kappa among the evaluators

  • Validity evidence: Content-based: Accuracy and Cohen's kappa among the evaluators; Based on the relationship with other variables: ANOVA for difference in precision between gender of the stimuli and evaluators‡

Donadon et al. (2019) 28
  • Database: Baby Faces
  • Elicitation method: The parents were instructed and trained to provoke the intended emotions
  • Capture patterns: ND
  • Inclusion criteria: Rasch model to minimize floor and ceiling effects, with values from 0.50 to 1.50; rate of correct answers according to Kringelbach et al. (2008) 64
  • Validation sample: 119 volunteers from the community
  • Age: 36 years; SD=12.8

  • Gender: M=36.1%; F=63.9%

  • Race: Caucasian (69.7%), Black (26.1%), and Japanese (4.2%)
  • Retest sample: 31 volunteers from the community

  • Age: 38.06 years; SD=11.57

  • Gender: M=35.5%; F=64.5%

  • Race: Caucasian (74%), Black (19.5%), and Japanese (6.5%)

  • Analysis of the items: Adjustment and difficulty of the items by the Rasch model

  • Precision: Reliability (test-retest)

  • Validity evidence: Content-based: Accuracy§; Based on the relationship with other variables: ANCOVA to assess the differences between groups considering the sociodemographic variables (gender, race, schooling level of the adults, and gender and race of the faces in the stimulus)‡

Ebner et al. (2010) 13
  • Database: FACES: a life-span database of facial expressions
  • Elicitation method: 1) Emotion induction through photographs and videos; 2) Emotions elicited from specific situations
  • Background: Gray

  • Clothes: Gray T-shirt

  • Distractors removed: Jewelry, glasses, and makeup

  • Inclusion criteria: Agreement among evaluators for (1) purity of the facial expression and (2) high intensity facial expression
  • Validation sample: 154 students
  • Age: 20-81 years

  • Gender: M=78; F=76

  • Race: Caucasian

  • Precision: Accuracy and consensus among the evaluators

  • Validity evidence: Content-based: Accuracy and consensus among the evaluators; Based on the relationship with other variables: ANOVA for face age × evaluator's age × emotion expressed‡

Egger et al. (2011) 29
  • Database: NIMH Child Emotional Faces Picture Set (NIMH-ChEFS)
  • Elicitation method: ND
  • Background: Gray

  • Clothes: ND

  • Distractors removed: ND

  • Inclusion criteria: The cutoff point for the image to be included was that ≥15 evaluators identified the intended emotion
  • Validation sample: 20 professors and employees of the Duke University Medical Center
  • Age: 38.3 years

  • Gender: M=7; F=13

  • Race: ND

  • Analysis of the items: Accuracy

  • Difficulty of the items: Intensity and representativeness scores

  • Precision: Agreement among the evaluators//

  • Validity evidence: Content-based: Accuracy and agreement among the evaluators

Ekman and Friesen (1976) 30
  • Database: Pictures of Facial Affect (POFA)
  • Elicitation method: Instruction on muscle movement of the emotions based on FACS
  • Capture patterns: ND
  • Inclusion criteria: ND
  • Validation sample: ND
  • Psychometric properties: ND
Fujimura and Umemura (2018) 31
  • Database: A facial expression database based on the dimensional and categorical model of emotions
  • Elicitation method: 1) Emotions elicited from specific situations; 2) Instruction on muscle movement of the emotions based on FACS
  • Background: White

  • Clothes: White T-shirt

  • Distractors removed: Glasses and strong makeup

  • Inclusion criteria: Agreement among the evaluators; mean of 69% agreement (SD=21%)
  • Validation sample: 39 university students
  • Age: 21.33 years; SD=2.39

  • Gender: M=19; F=20

  • Race: Japanese natives

  • Precision: Accuracy

  • Validity evidence: Content-based: Accuracy and confusion matrix of agreement rates for images of dynamic and static expressions of each model

Franz et al. (2021) 32
  • Database: Picture-Set of Young Children's Affective Facial Expressions (PSYCAFE)
  • Elicitation method: 1) Guidance of emotions in theater workshops; 2) Directed Facial Action Task used to guide the movement of anatomical landmarks
  • Background: White

  • Clothes: ND (just face)

  • Distractors removed: ND (just face)

  • Inclusion criteria: Step 1: confirmatory hierarchical cluster analysis by Ward's method; Step 2: intensity, authenticity, and likeability; accuracy (77-100%) and AFFDEX software
  • Validation sample (Step 1): 197 volunteers from the community
  • Age: 32.9 years; SD=16.1

  • Gender: M=33%; F=67%

  • Race: ND
  • Validation sample (Step 2): 44 volunteers from the community

  • Age: 25.7 years; SD=5.9

  • Gender: M=48%; F=52%

  • Race: ND

  • Precision: Accuracy

  • Validity evidence: Based on the relationship with other variables: Stimulus age × expressed emotion × accuracy

Garrido et al. (2017) 33
  • Database: Stills and Videos of facial Expressions (SAVE database)
  • Elicitation method: Emotions elicited from specific situations
  • Background: Gray

  • Clothes: White T-shirt

  • Distractors removed: Jewelry, glasses, and makeup

  • Inclusion criteria: Stimuli with an assessment of 2.5 SD above or below the mean
  • Validation sample: 120 university students
  • Age: 20.62 years; SD=3.39

  • Gender: M=22.5%; F=77.5%

  • Race: Caucasian

  • Precision: Accuracy

  • Validity evidence: Content-based: Accuracy and interest dimensions (valence, excitement, clarity, intensity, appeal, similarity, and familiarity); Based on the relationship with other variables: Accuracy × gender of the model and the participant

Giuliani et al. (2017) 15
  • Database: The DuckEES child and adolescent dynamic facial expressions stimulus set
  • Elicitation method: Emotions elicited from specific situations
  • Background: White

  • Clothes: ND

  • Distractors removed: ND

  • Inclusion criteria: Images recognized with ≥70% accuracy
  • Validation sample: 36 volunteers from the University of Oregon
  • Age: 19.5 years; SD=1.95

  • Gender: M=14; F=22

  • Race: ND

  • Precision: Accuracy

  • Validity evidence: Content-based: Accuracy and Fleiss’ kappa

Happy et al. (2015) 34
  • Database: The Indian Spontaneous Expression Database for Emotion Recognition (ISED)
  • Elicitation method: Emotion induction through videos
  • Background: ND

  • Clothes: ND

  • Distractors removed: ND

  • Inclusion criteria: Agreement among the evaluators (Fleiss’ kappa)
  • Validation sample: Four trained evaluators
  • Age: ND

  • Gender: M=2; F=2

  • Race: ND

  • Precision: Accuracy

  • Validity evidence: Content-based: Accuracy and Fleiss’ kappa

Kaulard et al. (2012) 35
  • Database: The MPI Facial Expression Database
  • Elicitation method: Emotions elicited from specific situations
  • Background: Black

  • Clothes: Black cape and hats

  • Distractors removed: Makeup and beards

  • Inclusion criteria: Consistency among the evaluators (Fleiss’ kappa)
  • Validation sample: 20 German natives
  • Age: 19-33 years

  • Gender: M=10; F=10

  • Race: ND

  • Precision: Accuracy

  • Validity evidence: Content-based: Accuracy and Fleiss’ kappa

Keutmann et al. (2015) 36
  • Database: Visual and vocal emotional expressions of adult and child actors
  • Elicitation method: Emotions elicited from specific situations
  • Background: Green

  • Clothes: ND

  • Distractors removed: ND

  • Inclusion criteria: Accuracy
  • Validation sample: 510 students, 226 from Drexel University and 284 from the University of Central Florida
  • Age: ND

  • Gender: ND

  • Race: ND

  • Analysis of the items: Difficulty analysis and item discrimination by means of the classical test theory

  • Precision: Accuracy

  • Validity evidence: Content-based: Accuracy

Kim et al. (2017) 37
  • Database: Korea University Facial Expression Collection – Second Edition (KUFEC-II)
  • Elicitation method: Instruction on muscle movement of the emotions based on FACS
  • Background: Gray

  • Clothes: Pattern

  • Distractors removed: Makeup, accessories, and dyed hair

  • Inclusion criteria: Internal consistency and accuracy
  • Validation sample: 75 evaluators
  • Age: 19-69 years (26.17 years, SD=5.69)

  • Gender: M=39; F=36

  • Race: ND

  • Precision: Accuracy

  • Validity evidence: Content-based: Accuracy, agreement among the evaluators, and scores for purity, valence, and intensity; Based on the relationship with other variables: ANOVA to test the effects of gender on recognition‡ and correlations between the participant's emotional state and task performance

Langner et al. (2010) 38
  • Database: Radboud Faces Database
  • Elicitation method: Instruction on muscle movement of the emotions based on FACS
  • Background: White

  • Clothes: Black T-shirt

  • Distractors removed: Glasses, earrings and makeup

  • Inclusion criteria: Accuracy
  • Validation sample: 276 students from Radboud University
  • Age: 21.2 years; SD=4.0

  • Gender: M=38; F=238

  • Race: ND

  • Precision: Accuracy

  • Validity evidence: Content-based: Accuracy and dimensions of interest (type of expression, intensity, clarity, genuineness, and valence); Based on the relationship with other variables: ANOVA comparing each of the precision variables with age, gender, expression, and gaze direction‡

LoBue and Thrasher (2015) 14
  • Database: The Child Affective Facial Expression (CAFE)
  • Elicitation method: Instruction on muscle movement of the emotions based on FACS, carried out during improvised games
  • Background: White

  • Clothes: White sheet

  • Distractors removed: ND

  • Inclusion criteria: Images recognized with ≥60% accuracy
  • Validation sample: 100 undergraduate students from Rutgers University

  • Age: ND

  • Gender: M=50; F=50

  • Race: African-American (17%), Asian (27%), White (30%), Latin (17%), and others (9%)

  • Analysis of the items: Item difficulty assessed with the Rasch model

  • Precision: Test-retest reliability and accuracy#

  • Validity evidence: Content-based: Accuracy

Lundqvist et al. (1998) 39
  • Database: Karolinska Directed Emotional Faces (KDEF)
  • Elicitation method: The participants were free to express the emotion as they wished
  • Background: Neutral
  • Clothes: Gray T-shirt
  • Distractors removed: Beard, mustache, earrings, glasses, and makeup
  • Inclusion criteria: ND
  • Validation sample: ND
  • Psychometric properties: ND
Ma et al. (2020) 40
  • Database: Han, Hui, and Tibetan Chinese facial expression database
  • Elicitation method: 1) Emotion induction through photographs and videos; 2) Instruction on muscle movement of the emotions based on FACS
  • Background: Black

  • Clothes: ND

  • Distractors removed: Jewelry

  • Inclusion criteria: Images recognized with ≥60% accuracy
  • Validation sample: 240 volunteers (80 from each study region)

  • Age: 23 years; SD=1.7

  • Gender: M=120; F=120

  • Race: Chinese

  • Precision: Accuracy** and split-half reliability

  • Validity evidence: Content-based: Accuracy; Based on internal consistency: Cronbach's alpha

Ma et al. (2015) 41
  • Database: Chicago Face Database (CFD)
  • Elicitation method: 1) Emotions expressed from verbal instructions; 2) Presentation of an equivalent photograph expressing the emotion
  • Background: White

  • Clothes: Gray T-shirt

  • Distractors removed: ND

  • Inclusion criteria: Two independent judges assessed how believable the expression was on a Likert scale from 1 to 9 (1=not at all believable; 9=very believable)
  • Validation sample: 1,087 evaluators (convenience sample)
  • Age: 26.7 years; SD=10.5

  • Gender: M=308; F=552

  • Race: White (n=516), Asian (n=117), Black (n=74), bi–or multi-race (n=72), Latin (n=57), others (n=18), and did not report (n=233)

  • Precision: Accuracy

  • Validity evidence: Based on the internal structure: exploratory factor analysis (Varimax rotation); Content-based: Accuracy, agreement among the evaluators, and effects of race and gender of the stimuli (criteria for item construction)

Maack et al. (2017) 42
  • Database: The Tromsø Infant Faces Database (TIF)
  • Elicitation method: The parents were instructed to elicit the intended emotions with games and specific stimuli
  • Background: White

  • Clothes: White overalls and hat

  • Distractors removed: ND

  • Inclusion criteria: The photographs with the best agreement among the evaluators were selected; mean classification of clarity and intensity below 2.5. Validation ratings: (a) expression portrayed, (b) clarity of the expression, (c) intensity of the expression, and (d) valence of the expression
  • Validation sample: 720 participants
  • Age: 18-70 years (32.8 years; SD=10.4)

  • Gender: M=21%; F=79%

  • Race: ND

  • Precision: Accuracy††

  • Validity evidence: Content-based: dimensions of interest (type of expression, clarity, intensity, and valence); Based on the relationship with other variables: ANOVA to compare performance × child-rearing stage × gender × mood

Meuwissen et al. (2017) 43
  • Database: Developmental Emotional Faces Stimulus Set (DEFSS)
  • Elicitation method: 1) Emotions elicited from specific situations; 2) Presentation of an equivalent photograph expressing the emotion
  • Background: Gray

  • Clothes: ND

  • Distractors removed: Jewelry

  • Inclusion criteria: Images recognized by fewer than 55% of the evaluators were excluded
  • Validation sample: 228 undergraduate and graduate university students, plus children enrolled by their families via the Internet
  • Age: 8-30 years

  • Gender: M=150; F=254
  • Race: White (81%), non-White (17%)

  • Precision: Accuracy‡‡

  • Validity evidence: Content-based: correct answers by age group, intensity, and emotion

Minear and Park (2004) 44
  • Database: A life span database of adult facial stimuli
  • Elicitation method: Emotions expressed from verbal instructions
  • Background: Gray

  • Clothes: ND

  • Distractors removed: ND

  • Inclusion criteria: ND
  • Validation sample: ND
  • Psychometric properties: ND
Negrão et al. (2021) 45
  • Database: The Child Emotion Facial Expression Set
  • Elicitation method: 1) Presentation of an equivalent photograph expressing the emotion; 2) Emotions elicited from specific situations
  • Background: White

  • Clothes: White

  • Distractors removed: ND

  • Inclusion criteria: Step 1: 100% agreement between two evaluators; Step 2: 100% agreement between another two evaluators (two per step)
  • Validation sample: Four judges
  • Age: ND

  • Gender: ND

  • Race: ND

  • Precision: Accuracy and Cohen's kappa

  • Validity evidence: Based on the relationship with other variables: accuracy × gender × age; emotion × race‡

Novello et al. (2018) 46
  • Database: Youth Emotion Picture Set
  • Elicitation method: 1) Emotions elicited from specific situations; 2) Presentation of an equivalent photograph expressing the emotion; 3) Presentation of videos and a game to specifically elicit the emotion of anger
  • Background: ND

  • Clothes: Black cape

  • Distractors removed: Jewelry

  • Inclusion criteria: Images recognized with ≥75% accuracy
  • Validation sample (adults): 101 volunteers recruited through the snowball method
  • Age: 18-77 years

  • Gender: M=31.7%; F=68.3%

  • Race: ND
  • Validation sample (adolescents): 54 volunteers from state schools

  • Age: 12-17 years

  • Gender: M=40.7%; F=59.3%

  • Race: ND

  • Precision: Accuracy and Cohen's kappa

  • Validity evidence: Based on the relationship with other variables: comparison of performance by age

O'Reilly et al. (2016) 47
  • Database: The EU-Emotion Stimulus Set
  • Elicitation method: Emotions elicited from specific situations
  • Background: White

  • Clothes: ND

  • Distractors removed: ND

  • Inclusion criteria: Accuracy
  • Validation sample: 1,231 volunteers
  • Age: 44 years; SD=16.7

  • Gender: M=428; F=803

  • Race: ND

  • Precision: Accuracy and Cohen's kappa

  • Validity evidence: Content-based: performance comparison by expression type, valence, and excitation

Olszanowski et al. (2015) 48
  • Database: Warsaw Set of Emotional Facial Expression Pictures (WSEFEP)
  • Elicitation method: Instruction on muscle movement of the emotions based on FACS
  • Background: White

  • Clothes: Black T-shirt

  • Distractors removed: Beards, mustaches, earrings, and glasses

  • Inclusion criteria: Agreement in recognition
  • Validation sample: 1,362 participants
  • Age: 26.6 years; SD=11.6

  • Gender: M=261; F=1,101

  • Race: ND

  • Precision: Agreement among the evaluators

  • Validity evidence: Content-based: purity analysis and intensity coefficient

Passarelli et al. (2018) 49
  • Database: Facial Expression Recognition Test (FERT)
  • Elicitation method: Presentation of an equivalent photograph expressing the emotion
  • Background: Black

  • Clothes: Black T-shirt

  • Distractors removed: ND

  • Inclusion criteria: Unidimensional model
  • Validation sample: 794 volunteers from the community
  • Age: 36.13 years; SD=13.79

  • Gender: M=36.2%; F=63.8%

  • Race: ND

  • Validity evidence: Based on the internal structure: factor analysis through the two-parameter Bayesian model

  • Based on the relationship with other variables: performance comparison between gender and age‡

  • Analysis of the items: Discrimination and difficulty through the Item Response Theory (IRT)

Romani-Sponchiado et al. (2015) 50
  • Database: Child Emotions Picture Set
  • Elicitation method: Emotion induction through videos
  • Background: ND

  • Clothes: ND

  • Distractors removed: ND

  • Inclusion criteria: Images recognized with ≥60% accuracy
  • Validation sample: 30 psychologists with experience in child development
  • Age: ND

  • Gender: ND

  • Race: ND

  • Precision: Accuracy** and Fleiss’ Kappa

  • Analysis of the items: Accuracy

  • Validity evidence: Content-based: Fleiss’ kappa; chi-square to compare the proportion of posed and spontaneous photographs

Samuelsson et al. (2012) 51
  • Database: Umeå University Database of Facial Expressions
  • Elicitation method: Instruction on muscle movement of the emotions based on FACS
  • Background: ND

  • Clothes: ND

  • Distractors removed: Makeup

  • Inclusion criteria: Accuracy
  • Validation sample: 526 participants
  • Age: 18-73 years (37.7 years; SD=13.0)

  • Gender: M=157; F=369

  • Race: ND

  • Precision: Accuracy

  • Validity evidence: Based on the relationship with other variables: performance comparison by gender and age

Sharma and Bhushan (2019) 52
  • Database: Indian Affective Picture
  • Elicitation method: 1) Presentation of an equivalent photograph expressing the emotion; 2) Emotions elicited from specific situations
  • Background: ND

  • Clothes: ND

  • Distractors removed: Beards, glasses, and makeup

  • Inclusion criteria: Accuracy and intensity (9-point scale)
  • Validation sample: 350 undergraduate students
  • Age: 20.58 years; SD=1.13

  • Gender: M=320; F=30

  • Race: ND

  • Analysis of the items: Accuracy

  • Validity evidence: Based on the relationship with other variables: t-test to compare men's and women's performance

Tottenham et al. (2009) 12
  • Database: The NimStim set of facial expressions
  • Elicitation method: Emotions expressed from verbal instructions
  • Background: ND

  • Clothes: ND

  • Distractors removed: Makeup

  • Inclusion criteria: Validity (accuracy and Cohen's kappa) and reliability
  • Validation sample (Group 1): 47 university students
  • Age: 19.4 years (SD=1.2)

  • Gender: M=39; F=47

  • Race: European-American (81%), African-American (6%), Asian-American (9%), and Hispanic-American (4%)
  • Validation sample (Group 2): 34 volunteers from the community

  • Age: 25.8 years (SD=4.1)

  • Gender: M=22; F=12

  • Race: European-American (59%), African-American (18%), Asian-American (6%), Hispanic-American (6%), and other races (12%)

  • Precision: Accuracy and test-retest

  • Validity evidence: Content-based: Accuracy and test-retest

Tracy et al. (2009) 53
  • Database: University of California, Davis, Set of Emotion Expressions (UCDS)
  • Elicitation method: Instruction on muscle movement of the emotions based on FACS
  • Background: Gray

  • Clothes: White T-shirt

  • Distractors removed: Jewelry

  • Inclusion criteria: Accuracy (the most recognized emotion of each expression was included in the final database)
  • Validation sample (Study 1): 175 undergraduate students
  • Age: ND

  • Gender: M=35%; F=65%

  • Race: ND
  • Validation sample (Study 2): 234 undergraduate students

  • Age: ND

  • Gender: M=21%; F=79%

  • Race: ND

  • Analysis of the items: Accuracy§§

  • Validity evidence: Content-based: Accuracy and performance based on race and gender of stimulus

Vaiman et al. (2017) 54
  • Database: FACS
  • Elicitation method: Emotions elicited from specific situations
  • Background: Blue

  • Clothes: White T-shirt

  • Distractors removed: Hair tied back (worn up)

  • Inclusion criteria: Images recognized with ≥70% accuracy
  • Validation sample: 466 students from the Psychology School of the National University of Córdoba
  • Age: 20.29 years; SD=4.33

  • Gender: M=23%; F=79%

  • Race: ND

  • Precision: Accuracy

  • Analysis of the items: Discrimination

  • Validity evidence: Based on the convergent relationship: Descriptive comparison of database performance vs. POFA database performance

Yang et al. (2020) 55
  • Database: Tsinghua facial expression database
  • Elicitation method: 1) Emotions elicited from specific situations; 2) Instruction on muscle movement of the emotions based on FACS
  • Background: White

  • Clothes: ND

  • Distractors removed: Tattoos, piercings, jewelry, glasses, and makeup

  • Inclusion criteria: Images recognized with ≥70% accuracy
  • Validation sample (young individuals): 34 Chinese young individuals
  • Age: 19-35 years (23.50 years; SD=4.41)

  • Gender: M=19; F=15

  • Race: Chinese
  • Validation sample (older adults): 31 Chinese older adults

  • Age: 58-72 years (65.06 years; SD=3.50)

  • Gender: M=13; F=18

  • Race: Chinese

  • Precision: Accuracy and kappa agreement among the evaluators

  • Validity evidence: Content-based: Accuracy and kappa agreement among the evaluators

ND: not declared; M: male; F: female; MTurk: Amazon Mechanical Turk; FACS: Facial Action Coding System (Ekman and Friesen, 1978) 65; ANCOVA: analysis of covariance; ANOVA: repeated-measures analysis of variance.

* Only images with ≥50% accuracy were included in the final database;
† Satisfactory indexes;
‡ There was a significant difference in precision between the analyzed variables;
§ The mean rate of correct identification of the emotions was 62.5%;
// Only images recognized by ≥15 evaluators were included in the final database;
¶ There was no significant difference in precision between the analyzed variables;
# The mean rate of correct identification of the emotions was 66%;
** Only images with ≥60% accuracy were included in the final database;
†† Accuracy is presented for each emotion and varied from 44 to 100%;
‡‡ Only images recognized by at least 55% of the evaluators were included in the final database; the mean recognition of the final database was 63%;
§§ The mean recognition rate of the final database varied from 47 to 94%.