J Oral Rehabil. 2022 Oct 20;49(12):1173–1180. doi: 10.1111/joor.13378

Automated detection of smiles as discrete episodes

Hisham Mohammed 1, Reginald Kumar Jr 1, Hamza Bennani 2, Jamin B Halberstadt 3, Mauro Farella 1,4
PMCID: PMC9828522  PMID: 36205621

Abstract

Background

Patients seeking restorative and orthodontic treatment expect an improvement in their smiles and oral health‐related quality of life. Nonetheless, the qualitative and quantitative characteristics of dynamic smiles are yet to be understood.

Objective

To develop, validate, and introduce open‐access software for automated analysis of smiles in terms of their frequency, genuineness, duration, and intensity.

Materials and Methods

A software script was developed using the Facial Action Coding System (FACS) and artificial intelligence to assess activations of (1) the cheek raiser, a marker of smile genuineness; (2) the lip corner puller, a marker of smile intensity; and (3) the perioral lip muscles, a marker of lips apart. Thirty study participants were asked to view a series of amusing videos. A full‐face video was recorded using a webcam. The onset and cessation of smile episodes were identified by two examiners trained in FACS coding. A receiver operating characteristic (ROC) curve was then used to assess detection accuracy and optimise thresholding. The videos of participants were then analysed off‐line to automatically assess the features of smiles.

Results

The area under the ROC curve for smile detection was 0.94, with a sensitivity of 82.9% and a specificity of 89.7%. The software correctly identified 90.0% of smile episodes. While watching the amusing videos, study participants smiled 1.6 (±0.8) times per minute.

Conclusions

Features of smiles such as frequency, duration, genuineness, and intensity can be automatically assessed with an acceptable level of accuracy. The software can be used to investigate the impact of oral conditions and their rehabilitation on smiles.

Keywords: orthodontics, smiling, validation studies

1. INTRODUCTION

Smiling is a spontaneous facial expression occurring throughout everyday life, which varies considerably between individuals. 1 While the interpretation of smiling may appear straightforward, it is actually one of the most complex facial expressions and can be ambiguous. 2 Not only can smiles have different forms and meanings, but they also occur in different situations and as a consequence of different eliciting factors. 3 , 4

Smile analysis in dentistry has largely focused on static images. 5 More recently, however, there has been a paradigm shift in treatment planning and smile rehabilitation from static smiles to dynamic smiles; herein lies the ‘art of the smile’. 6 As the pursuit of better dentofacial aesthetics increases, it is essential to distinguish between posed and spontaneous smiles, the differences between which are significant and can influence treatment planning and smile design. 5 Understanding the characteristics of different smiles and the associated age‐related changes in orofacial musculature, for example, is important to the decision‐making process to achieve ‘ideal’ tooth display. 7 However, this process should not be confined to aesthetic elements alone but should also extend to understanding whether an oral rehabilitation treatment, including orthodontics, actually affects how often and in what way a patient smiles. 8 , 9

Smiles that occur in situations of spontaneous, pure enjoyment or laughter are often referred to as genuine ‘Duchenne’ smiles, in acknowledgement of the scientist who first described their features. 10 , 11 The Duchenne smile prompts a combined activation of the zygomaticus major and the orbicularis oculi muscles. This pattern of muscular activity distinguishes genuine smiles from ‘social’ smiles, which are generally expressed during conditions of non‐enjoyment. 12 , 13 The identification of Duchenne smiles relies on subtle analysis of facial expressions. 14

The Facial Action Coding System (FACS) 15 is a popular and reliable method for detecting and quantifying the frequency of facial expressions from full‐face video recordings. 16 The FACS uses action units (AUs), which code for actions of individual muscles or groups of muscles during facial expression. 15 The activation level of each AU is scored using intensity scores, ranging from ‘trace’ to ‘maximum’. According to FACS, the onset of a smile can be identified when the activation of the zygomaticus major displays traces of raised skin within the lower‐to‐middle nasolabial area together with traces of upwardly angled and elongated lip corners. 15 These muscle activities increase in intensity until the smile apex is reached and then revert until no further traces of activation of the zygomaticus major can be recognised, which denotes the smile offset. 15 The introduction of FACS has undoubtedly advanced the study of facial expressions, as it allows real‐time assessment of emotions; however, its use for manual detection and coding of AUs presents limitations: (a) the need for experienced coders able to accurately identify, on a frame‐wise basis, the onset, apex, and offset of a smile; 16 (b) an extremely laborious coding process, which poses a major challenge in large‐scale research; and (c) susceptibility to observer biases 17 and high costs. 18 These limitations of manual smile analysis have led to computational developments to automatically detect dynamic smiling features. 19

FACS focuses primarily on the frame‐by‐frame identification of active target AUs and does not include comprehensive analyses of smiling as discrete episodes whose individual features and patterns can be characterised. An episode‐wise analysis of individual smiles would allow researchers to address questions such as how often, how long, how strongly, and how genuinely individuals smile under different experimental and/or situational factors, and what impact factors such as oral health‐related conditions have on the way people smile. This would also pave the way to understanding the dynamic characteristics of smiles in oral rehabilitation patients 8 and assist in areas where smile rehabilitation through individualised muscle mimicry and training is required. 20

The aim of this study is to develop and validate a user‐friendly software script, based on well‐established pattern‐recognition algorithms for tracking facial landmarks and facial AUs, so that discrete smile episodes can be analysed off‐line from full‐face videos and quantified in terms of frequency, duration, authenticity, and intensity of smile.

2. MATERIALS AND METHODS

The study included two phases. During the first phase, a software script was developed with the help of a computer scientist (HB) and extensively tested with ongoing feedback from a focus group represented by the authors and a few test volunteers. During the second phase, preliminary data were collected from a convenience sample of thirty study participants to optimise the performance of the algorithm for smiling detection and to identify optimal thresholds, so that the software's performance could be validated against two manual coders.

2.1. Phase 1: software script

OpenFace 2.2.0 was used as a platform to extract information about the facial AUs considered relevant for this study. 21 OpenFace is open‐source automatic facial recognition software intended for researchers interested in machine learning, affective computing, and facial behaviour analysis. 21 It is an update of a previous version of a facial behaviour analysis toolkit, which is based on convolutional neural networks and allows automated identification of 68 facial landmarks at any frame rate. 21 , 22 , 23 The software's output includes a timestamp, quantitative information about all facial landmarks, head posture, eye gaze, activation levels of facial AUs, and three‐dimensional (3D) coordinates of individual facial landmarks, as detected in each frame of the analysed video. AUs represent individual components of muscle movements, whose activation is identified by monitoring the 3D displacement of facial landmarks, with specific sets of landmarks corresponding to each AU. The software also generates videos showing dynamic changes of the identified facial landmarks, together with 3D information on the gaze vector and the head posture. An example of facial landmarks identified during smiling is shown in Figure 1.
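For illustration only (this is a minimal sketch, not the authors' script), the following Java code shows how the per‐frame activations of AU6, AU12, and AU25 could be read from an OpenFace 2.2.0 CSV output file; the column names ("timestamp", "AU06_r", "AU12_r", "AU25_c") follow the usual OpenFace naming convention and should be checked against the actual output header.

    // Minimal sketch: reading per-frame AU activations from an OpenFace 2.2.0 CSV output.
    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    public class OpenFaceReader {

        /** One analysed video frame with the AUs relevant to smiling. */
        public static class FrameAUs {
            public final double timestamp; // seconds from the start of the video
            public final double au06;      // cheek raiser intensity, 0-5
            public final double au12;      // lip corner puller intensity, 0-5
            public final double au25;      // lips apart, 0 or 1

            public FrameAUs(double timestamp, double au06, double au12, double au25) {
                this.timestamp = timestamp;
                this.au06 = au06;
                this.au12 = au12;
                this.au25 = au25;
            }
        }

        /** Reads the OpenFace CSV and keeps only timestamp, AU06_r, AU12_r and AU25_c. */
        public static List<FrameAUs> read(String csvPath) throws IOException {
            List<FrameAUs> frames = new ArrayList<>();
            try (BufferedReader br = new BufferedReader(new FileReader(csvPath))) {
                String[] header = br.readLine().split(",");
                List<String> cols = new ArrayList<>();
                for (String h : header) cols.add(h.trim()); // OpenFace headers may contain spaces
                int iTime = cols.indexOf("timestamp");
                int iAu06 = cols.indexOf("AU06_r");
                int iAu12 = cols.indexOf("AU12_r");
                int iAu25 = cols.indexOf("AU25_c");
                String line;
                while ((line = br.readLine()) != null) {
                    String[] v = line.split(",");
                    frames.add(new FrameAUs(
                            Double.parseDouble(v[iTime].trim()),
                            Double.parseDouble(v[iAu06].trim()),
                            Double.parseDouble(v[iAu12].trim()),
                            Double.parseDouble(v[iAu25].trim())));
                }
            }
            return frames;
        }
    }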

FIGURE 1. Example frame from a smiling study participant, with activation of AU6 > 0.5 and AU12 > 1.5

As this study focused on smiling, the relevant AUs were AU6, AU12, and AU25. AU6 (‘cheek raiser’) tracks the activity of the orbicularis oculi muscle, pars orbitalis, and is generally considered a marker of smile genuineness (i.e. a Duchenne marker). AU12 (‘lip corner puller’) tracks the activity of the zygomaticus major muscle. AU25 (‘lips apart’) tracks the activity of the depressor labii inferioris muscle. The intensities of AU6 and AU12 activation were automatically coded by the software using a six‐point ordinal scale (0–5), with values of 1–2 indicating weak (trace) to slight activation, a value of 3 indicating marked to pronounced activity, a value of 4 indicating extreme activation, and a value of 5 indicating the maximum possible intensity. 15 , 24 AU25 was assigned a dichotomous value: either 0, indicating lips closed without teeth showing, or 1, indicating lips apart with teeth showing.

A dedicated software script was developed in Java (Oracle JDK 1.8.0_111). The software has a stand‐alone, user‐friendly graphical interface, which allows users to open the output file of OpenFace and to detect all the smiling episodes occurring throughout an entire video or within a well‐defined portion of a video, as defined by the start and end frame numbers.

To identify the onset of a smiling episode, both AU6 and AU12 had to be above the specified thresholds. The end of the smiling episode was identified by a subthreshold activation of either AU for longer than 2 s. In effect, this means that when two or more smiling episodes were separated by less than 2 s, they were merged into a single episode. The stand‐by time could be changed by the user.
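A minimal sketch of this episode logic (not the published script) is shown below in Java: a smile starts when both AU6 and AU12 exceed their thresholds, and it ends once either AU remains below threshold for longer than the stand‐by time (2 s by default), which also merges episodes separated by shorter gaps; all names are illustrative.

    // Minimal sketch of the episode segmentation described above.
    import java.util.ArrayList;
    import java.util.List;

    public class SmileSegmenter {

        public static class Episode {
            public int startFrame;
            public int endFrame; // inclusive, last above-threshold frame
        }

        public static List<Episode> detect(double[] au06, double[] au12,
                                           double th6, double th12,
                                           double fps, double standBySeconds) {
            List<Episode> episodes = new ArrayList<>();
            int maxGapFrames = (int) Math.round(standBySeconds * fps);
            Episode current = null;
            int gap = 0;
            for (int f = 0; f < au06.length; f++) {
                boolean smiling = au06[f] > th6 && au12[f] > th12;
                if (smiling) {
                    if (current == null) {              // onset: both AUs above threshold
                        current = new Episode();
                        current.startFrame = f;
                    }
                    current.endFrame = f;
                    gap = 0;
                } else if (current != null) {
                    gap++;
                    if (gap > maxGapFrames) {           // sub-threshold for longer than the stand-by time
                        episodes.add(current);
                        current = null;
                        gap = 0;
                    }
                }
            }
            if (current != null) episodes.add(current); // close an episode running to the end of the video
            return episodes;
        }
    }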

For every smiling episode, the software reported a progressive count, the onset time, the duration, and the mean activation of AU6 and AU12 across the entire episode. The onset and duration of individual episodes were given at a resolution equal to the inverse of the frame rate of the analysed video.

In order to assign a clinically meaningful value to AU25, this was reported as the proportion of time teeth were shown during a given smiling episode. For example, an activity value of 50% indicated that teeth were visible during half of the episode. Additional outcome measurements included the number of smile episodes per minute and the relative smile time (%). This was calculated as the proportion of time that each individual had smiled while watching the video clip, by summing the durations of all smiling episodes and then dividing the total duration of smiling by the length of the video.
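The outcome measures described above can be expressed compactly; the following Java sketch (illustrative only, with hypothetical method names) computes the teeth‐show percentage of an episode from the per‐frame AU25 values, the number of episodes per minute, and the relative smile time.

    // Minimal sketch of the episode-level outcome measures described above.
    public class SmileMetrics {

        /** Percentage of frames within the episode during which teeth were shown (AU25 = 1). */
        public static double teethShowPercent(double[] au25, int startFrame, int endFrame) {
            double sum = 0;
            for (int f = startFrame; f <= endFrame; f++) sum += au25[f];
            return 100.0 * sum / (endFrame - startFrame + 1);
        }

        /** Number of smile episodes per minute of analysed video. */
        public static double episodesPerMinute(int episodeCount, double videoSeconds) {
            return episodeCount / (videoSeconds / 60.0);
        }

        /** Relative smile time: cumulative smiling duration divided by video length, in percent. */
        public static double relativeSmileTime(double cumulativeSmileSeconds, double videoSeconds) {
            return 100.0 * cumulativeSmileSeconds / videoSeconds;
        }
    }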

2.2. Phase 2: descriptive study and software validation

Data were collected at the Craniofacial Clinical Research Laboratory at the Faculty of Dentistry, University of Otago, under local Ethics Committee approval number H19/160. All participants enrolled in this study agreed to participate and signed a written informed consent form. The report of phase 2 conforms to the guidelines for reporting observational studies (STROBE). 25

2.3. Sample characteristics

A convenience sample of thirty participants (16 females, 14 males; mean age 18.9 years, SD 2.2 years) was recruited as part of a larger project aiming to investigate the impact of oral health, psychological traits, and sociodemographic variables on smiling behaviour. Recruitment started in September 2020 and ended in December 2020.

Participants aged 16–22 years were recruited through local public advertisements, including university mailing lists, social media, flyers, and word of mouth. Exclusion criteria were: (a) cleft lip/palate or other craniofacial syndromes; (b) severe periodontitis affecting front teeth; (c) history of major psychiatric disorders; (d) Bell's palsy; (e) removable dentures; (f) enamel dysplasia or severe stains affecting front teeth; (g) history of dysmorphophobia. Wearing eyeglasses was not an exclusion criterion; however, only one participant requested to wear glasses while watching the videos, and this apparently did not interfere with landmark identification. The sample investigated in this study was a randomly selected subset of around a hundred study participants. This larger sample exhibited a variety of occlusal conditions and is part of a related investigation examining the influence of malocclusion on smiling features using the proposed approach.

The occlusal characteristics of study participants were assessed using the Dental Aesthetic Index (DAI). The DAI is a popular epidemiological tool for assessing a specific set of occlusal traits, such as missing anterior teeth, crowding and spacing in the incisal region, midline diastema, overjet, anterior open bite, incisor irregularity, and molar relationship. 26 The weighted component scores are summed, and a constant of 13 is added, to produce the final DAI aggregate.
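As a sketch of this calculation only (the method name and parameters are hypothetical), the DAI aggregate can be computed as a weighted sum of the component scores plus the constant of 13; the published component weights should be taken from Cons et al. 26

    // Illustrative sketch of the DAI aggregate: weighted sum of component scores + 13.
    public class DentalAestheticIndex {

        /**
         * @param componentScores scores of the DAI occlusal components (missing teeth, crowding,
         *                        spacing, diastema, overjet, open bite, molar relation, etc.)
         * @param weights         the corresponding published regression weights (see Cons et al.)
         * @return the DAI aggregate
         */
        public static int aggregate(int[] componentScores, double[] weights) {
            double sum = 0;
            for (int i = 0; i < componentScores.length; i++) {
                sum += componentScores[i] * weights[i];
            }
            return (int) Math.round(sum + 13); // the DAI constant
        }
    }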

2.4. Experimental setup

An ultra‐high‐definition web camera (Logitech BRIO 4K Ultra High‐Definition Webcam), with the resolution set to 4096 × 2160 pixels and the frame rate set at 30 frames per second, was secured atop a 27‐inch Dell UltraSharp U2715H computer monitor (resolution 2560 × 1440 pixels), which was used to display the video clip.

Each participant was seated 60–70 cm away from the display monitor. The height of the monitor was adjusted so that the participant's eyes were aligned at a point corresponding to the middle of the screen when the participant's head was in natural head position.

Face lighting was individually optimised using a ring light (APEXEL 10″ 26 cm LED Selfie Circle Ring, Apexel), which was also secured on the back of the screen. A neutral background was used to avoid light reflections and interference from background objects, which could affect off‐line analyses of the video. The room light was switched off during the entire recording.

2.5. Smile triggering video

Three amusing video clips were identified via a small pilot study by the focus group described previously. The first clip showed an episode of Mr Bean (Mr Bean Rides Again, Act 5: The Flight; 3 min), a character widely used as a trigger stimulus in smile research. 27 The other two clips showed the non‐stop laughter of a baby (47 s) and Juan Joya Borja's viral laughing clip, widely known as the ‘Spanish laughing guy’, from a televised episode of Ratones Coloraos, which first aired in 2001 and went viral in 2007 (46 s). 28 The three clips were separated by fade‐outs and merged into a single video, 4 min and 33 s in length (4 min and 24 s without transitions).

Following the amusing video clips, the video presented instructions for completing a series of tasks, with time kept by a countdown timer and progress bar. The tasks involved initiating a series of jaw movements and facial expressions that could confound the identification of smiles: speaking (counting 1–10), yawning, coughing, mouth covering; and posing anger, sadness, fear, surprise, disgust, smiling, and neutral expressions. The speaking task lasted 10 s, while all other tasks lasted 6 s, with a 6‐s inter‐task interval. All tasks were administered once, except for smiling, which was repeated three times. These tasks allowed precise tuning of the machine learning models applied to detect smiling episodes in the video and individual‐specific calibration of the algorithm.

2.6. Procedure

Each participant's involvement in the study took place in a single session. At the start of this session, each participant was checked against the inclusion/exclusion criteria, and the occlusal characteristics were scored using the DAI. The participants were then given an overview of the research project and signed the written consent form for participation. To elicit natural responses and trigger spontaneous smiling reactions during the video recording, the participants were not told that the main outcomes of the study were the features of their smiles. Afterwards, each participant was left alone in the recording room and was asked to view the video clip and then perform the follow‐up tasks.

After viewing the video, each participant was asked to fill in two questionnaires. The first was a 12‐item Smile Aesthetics‐Related Quality of Life (SERQoL) questionnaire relating to three dimensions of the psychosocial impact of smiles. 29 The second was the 60‐item IPIP–NEO–60 personality scale. 30 The results of these questionnaires were the subject of another investigation and are not analysed in this report. Each participant was given a $20 voucher as reimbursement for participation in this project.

2.7. Data analysis and statistics

The full‐face videos were reviewed and coded frame‐wise by two examiners (HM and RK), who were instructed to identify each distinct smiling episode (i.e. preceded and followed by a smile‐free period of at least two seconds) in each study participant. The frames corresponding to the onset and cessation of each smiling episode were identified by the two coders, who viewed the full‐face videos in the same setting, until consensus was reached. When consensus was not reached, a third coder (MF) was consulted.

The validity of the smiling detection software was assessed by calculating receiver operating characteristic (ROC) curves, using the examiner‐coded smiles as the reference standard and classification variable. ROC analysis was performed frame‐wise across the smile and smile‐free portions of each recording. Sensitivity (Se = true positive rate) and specificity (Sp = true negative rate) were calculated frame‐wise and maximised using the Youden index (Se + Sp − 1). The ROC curve was plotted with the false positive rate (1 − Sp) on the x‐axis and the true positive rate (Se) on the y‐axis, and was generated by varying two thresholds: the first (Th1) representing the activation level of the cheek raiser muscle (AU6) and the second (Th2) representing the activation level of the lip corner puller muscle (AU12). The two thresholds Th1 and Th2 were varied stepwise in 0.05 increments. The area under the ROC curve (AUC) and the overall accuracy of the test ((true positive frames + true negative frames)/total frames) were also calculated.
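A minimal sketch of this threshold optimisation (illustrative, not the original analysis code) is given below in Java: frame‐wise sensitivity and specificity are computed against the coder‐labelled frames for each (Th1, Th2) pair on a 0.05 grid, and the pair maximising the Youden index (Se + Sp − 1) is retained. It assumes both smile and smile‐free frames are present in the labels.

    // Minimal sketch of grid-search threshold optimisation using the Youden index.
    public class ThresholdOptimiser {

        /** Returns {bestTh1, bestTh2} maximising Se + Sp - 1 against coder-labelled frames. */
        public static double[] bestThresholds(double[] au06, double[] au12, boolean[] coderSmile) {
            double bestYouden = Double.NEGATIVE_INFINITY;
            double bestTh1 = 0, bestTh2 = 0;
            for (double th1 = 0; th1 <= 5.0; th1 += 0.05) {      // AU6 (cheek raiser) threshold
                for (double th2 = 0; th2 <= 5.0; th2 += 0.05) {  // AU12 (lip corner puller) threshold
                    int tp = 0, tn = 0, fp = 0, fn = 0;
                    for (int f = 0; f < coderSmile.length; f++) {
                        boolean predicted = au06[f] > th1 && au12[f] > th2;
                        if (predicted && coderSmile[f]) tp++;
                        else if (predicted) fp++;
                        else if (coderSmile[f]) fn++;
                        else tn++;
                    }
                    double se = tp / (double) (tp + fn);  // sensitivity
                    double sp = tn / (double) (tn + fp);  // specificity
                    double youden = se + sp - 1;
                    if (youden > bestYouden) {
                        bestYouden = youden;
                        bestTh1 = th1;
                        bestTh2 = th2;
                    }
                }
            }
            return new double[]{bestTh1, bestTh2};
        }
    }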

After threshold optimisation, the software script was run on the entire recording (including the post‐video tasks) to identify smiling episodes and to investigate possible misclassifications (false positives) of confounding tasks as smiles. The three smiling tasks, part of the second section of the video, were excluded from the confounds analysis.

To obtain an estimate of smile genuineness (0–5), intensity (0–5), and teeth exposure (%), the activation levels of AU6, AU12, and AU25 were averaged across each episode. The outcome variables considered in this study were the number of smiling episodes per session, the mean and cumulative duration of smiling episodes, and the mean activation of AU6, AU12, and AU25.

All the data were analysed in Excel (Version 16.51, Microsoft Corporation) and SPSS (version 20.0 IBM Corporation).

3. RESULTS

Study participants were young adults, mostly Caucasian (>80%); about half were female, and they had a broad range of malocclusions (Table 1).

TABLE 1. Demographic characteristics of the participants (n = 30)

Variable                          Value
Age in years, mean (SD)           18.9 (2.2)
Gender, n (%)
  Female                          16 (53.3)
  Male                            14 (46.7)
Ethnicity, n (%)
  Caucasian                       25 (83.5)
  Pacific Islander                2 (6.5)
  Other                           3 (10.0)
DAI score, mean (SD)              30.7 (9.6)

The distinct smile episodes that were manually identified frame‐wise by the coders were used to build a ROC curve (Figure 2). The area under the ROC curve was 0.94, and the overall accuracy of smiling frame detection was 84.5%. The maximisation of the Youden index indicated that detection accuracy was highest with thresholds of 0.5 for AU6 and 1.5 for AU12. These thresholds resulted in a sensitivity of 82.9% and a specificity of 89.7% and were used in the subsequent episode‐wise analysis of the study sample.

FIGURE 2. ROC curve based on two thresholds for AU6 and AU12 and frame‐wise detection of smiles

After calibration of the algorithm, the true‐positive detection rate for individual smiling episodes was 90.0%. In addition, 11.3% of confounder tasks were detected as false positives. The tasks most often misclassified as smiles were mouth covering, which accounted for around one‐third of the false detections, and yawning, which accounted for around 20% of the false positive detections.

According to the classifier, study participants smiled approximately seven times, with each smile episode lasting approximately 10 s; smiling accounted for about one‐third of the duration of the humorous videos. The features of smiling episodes showed large inter‐individual variation in the frequency, intensity, and duration of smiles (Figure 3).

FIGURE 3. Three‐dimensional histogram depicting all the smiling episodes detected, by duration and intensity of AU12.

Descriptive statistics for the individual features of smiles, such as the activation of specific AUs, are given in Table 2. Activation of AU12, which is the main AU of the smile, 15 ranged from slight to pronounced in intensity. Some participants hardly showed teeth on smiling, while others showed teeth throughout the entire smile episode.

TABLE 2. Descriptive statistics for the features of smiling episodes detected from the 30 study participants while watching the video footage

Feature                              Mean    SD      Minimum   Maximum
Number of episodes per minute        1.6     0.8     0.2       3.1
Mean duration of episode (s)         11.3    5.6     2.3       19.2
Relative smile time (%)              34.0    23.5    1.1       80.6
Genuineness (AU6; 0–5)               1.3     0.4     0.4       2.1
Intensity (AU12; 0–5)                2.2     0.4     1.6       3.0
Tooth show (AU25; %)                 47.2    27.4    0.8       100

4. DISCUSSION

This paper presents a user‐friendly automated software script, which can detect and quantify smile features in terms of: (1) frequency; (2) onset, offset, and overall duration of each episode; (3) peak intensity; (4) percentage of tooth display in each smiling episode; and (5) smile genuineness. In order to identify spontaneous smiles, we introduced two detection thresholds based on the levels of activation of the cheek raiser (AU6, a Duchenne marker) and the lip corner puller (AU12). The findings indicate an acceptable detection accuracy of the proposed method, which can be used for an episode‐wise analysis of smiles. The software does not require calibration and is available upon request.

To the best of our knowledge, this is the first study to use available libraries and FACS to present an automated episode‐wise analysis of smiles. The script has a user‐friendly interface and is used in conjunction with OpenFace 2.2.0, an open‐source toolkit that any researcher can utilise. Moreover, the introduction of artificial intelligence (AI) to dynamic smile analysis minimises observer‐related biases and is expected to reduce the time and labour associated with manual analyses. 31

The measure of diagnostic accuracy includes both sensitivity and specificity. 32 In our study, the sensitivity of 82.9% indicates that a high proportion of true smile episodes were detected, and the specificity was 89.7%, as presented in the ROC curve plot. Taken together, both values align well with expectations in the area of automated facial expression recognition and dynamic analysis of human emotions. 33 , 34 The descriptive values from the automated analysis of the sample clips showed that participants smiled around two times per minute, on average for around 11 s per episode, and the mean intensity of zygomaticus major activation (AU12) was 2.2 ± 0.4. These findings align with previous research in which participants viewed a funny clip, with a mean duration of AU12 activation of 13.8 ± 12.7 s and a maximum intensity of 1.8 ± 1.1. 35 Further, a recent study reported a mean AU12 intensity of 4.1 during genuine smiling and 3.9 during posed smiles. 36 Although these AU12 intensity values appear comparable for genuine and posed smiles, previous research has pointed to recognisable differences in AU6 activity between genuine and posed smiles, arguing that it would be difficult to deliberately fake a genuine smile. 10 , 37 However, there is also some evidence suggesting that Duchenne markers may be artefacts of smile intensity rather than a reliable and distinct indicator of smile authenticity. 38 Such discrepancies in the reported findings may be ascribed to the different methods used to trigger and measure smiles, the social context, and the sociodemographic characteristics of the samples, all of which may influence the features of smiling. 39 Nevertheless, these discrepancies impose limitations on our understanding, recognition, and differentiation of genuine and posed smiles.

Smiling is an expression that can be triggered on demand, as well as spontaneously within a social context. In phase 2, the trigger video successfully elicited smiles in individuals regardless of their ethnic background, age, or malocclusion. This suggests that individuals are prone to smiling when a suitable trigger is used, even outside circumstances and situations involving social integration. 40 In addition, although the participants were informed about the video recording process, masking the fact that the purpose of the recording was the assessment of smiling episodes arguably helped to elicit a natural response to the trigger (i.e. spontaneous smiling), as awareness of being observed is well known to be an important variable in smiling research. 39

Various spontaneous facial expressions occurring in unscripted social contexts can be detected with automated recognition systems. 41 However, establishing reliable automated coding of discrete facial expressions is a challenging process. 41 Detection issues often arise when multiple AUs are involved; hence, recognising compound facial expressions, in which individuals combine various expressions, is daunting. 42 In addition, dynamic tracking and head orientation also pose obstacles to AI recognition. 43 In our study, we incorporated post‐video tasks covering different plausible confounders to tackle these issues. These tasks included: counting numbers, yawning, coughing, mouth covering; and posing anger, sadness, fear, surprise, disgust, and neutral expressions. 44 Our findings show that only 11.3% of the tasks were identified as false positives after calibration of the algorithm. Misclassified smiles were mostly associated with mouth covering and yawning. In turn, the proposed method for episode‐wise detection of smiles could serve as a cornerstone for better landmark feature extraction and expression recognition in future research. 45

The present study has some limitations, which should be emphasised. Most obviously, the software was validated on a relatively small convenience sample of mostly Caucasian adolescents and young adults who smiled spontaneously. Further research is needed to investigate the quantitative features of posed and spontaneous smiles and to validate the proposed method in other samples not analysed in the present study. Research is also needed to determine the classifier's accuracy in larger and more heterogeneous samples, to investigate the effect of ethnicity, sex, age, and other demographic characteristics on smiling, and to increase the external validity of our findings. 46 In addition, it is important to note that the accuracy achieved was not perfect, although an AUC value close to 1 (0.94 in our study) indicates very high discrimination performance of the software. 47 Further enhancements are plausible with future improvements in AI. These enhancements could also include other methods of quantifying muscular activity to objectively assess genuine smiles, such as electromyography (EMG) and wearable detection devices. 48 The possible effect of calibration on smile detection accuracy also needs to be investigated further, in different samples and in comparison with other smile detection methods, to establish which is better in terms of accuracy, cost, labour, and overall handling.

In summary, this paper presents a novel automated episode‐wise quantitative assessment of smiling dynamics. The software provides a quantitative analysis of the frequency, duration, intensity, and characteristics of smiles through a user‐friendly interface that is available for further use in smile research. The proposed approach has several potential applications within dentistry. Firstly, it would offer researchers the opportunity to understand to what extent oral conditions, such as dental, oral, and facial anomalies, objectively influence smiles. This could then translate into investigations of the effect of corrective treatment, such as oral rehabilitation and dental and orthodontic treatment, on specific features of smiles. Furthermore, the proposed approach may be used to trigger and investigate spontaneous smiles, thus informing restorative treatment planning beyond the information depicted in static posed photographs.

In addition, the capability of the algorithm to detect and analyse observable data from individuals smiling under controlled conditions opens the door to addressing further challenges in other areas. For example, future research could target understanding the dynamics of smiling in real‐time conditions. Moreover, the dental literature is replete with research on the static features of smiles, while data on dynamic features are scarce. It would be interesting and important to examine the relationship between malocclusion patterns, different orthodontic treatment modalities, and their effect on smiling from a dynamic standpoint. Based on the aforementioned points, future developments and further implementation of the pattern‐recognition algorithm could be significant not only in dental disciplines, but also in psychology, sociology, and behavioural research.

5. CONCLUSIONS

Individual smile episodes and their quantitative features, such as frequency, duration, genuineness, and intensity, can be automatically assessed with an acceptable level of accuracy. The proposed approach can be used to investigate the impact of oral health and oral rehabilitation on smiles.

AUTHOR CONTRIBUTIONS

Hisham Mohammed and Reginald Kumar Jr were involved in conceptualisation, investigation, validation, and writing the original draft. Hamza Bennani was involved in software and analysis. Jamin Halberstadt was involved in methodology, reviewing and editing, and supervision. Mauro Farella was involved in conceptualisation, methodology, reviewing and editing, analysis, and supervision.

CONFLICT OF INTEREST

All authors declare that they have no competing interests.

PEER REVIEW

The peer review history for this article is available at https://publons.com/publon/10.1111/joor.13378.

ACKNOWLEDGEMENT

This research was funded by a research grant from Colgate‐Palmolive and internal funding from the Sir John Walsh Research Institute, Faculty of Dentistry, University of Otago, New Zealand. Open access publishing facilitated by University of Otago, as part of the Wiley ‐ University of Otago agreement via the Council of Australian University Librarians.

Mohammed H, Kumar R Jr, Bennani H, Halberstadt JB, Farella M. Automated detection of smiles as discrete episodes. J Oral Rehabil. 2022;49:1173‐1180. doi: 10.1111/joor.13378

Hisham Mohammed and Reginald Kumar Jr contributed equally to the manuscript.

DATA AVAILABILITY STATEMENT

The data are available upon reasonable request.

REFERENCES

1. Ambadar Z, Cohn JF, Reed LI. All smiles are not created equal: morphology and timing of smiles perceived as amused, polite, and embarrassed/nervous. J Nonverbal Behav. 2009;33(1):17-34.
2. Hess U, Beaupré MG, Cheung N. Who to whom and why–cultural differences and similarities in the function of smiles. An Empirical Reflection on the Smile. 2002;4:187.
3. Krumhuber EG, Manstead AS. Can Duchenne smiles be feigned? New evidence on felt and false smiles. Emotion. 2009;9(6):807-820.
4. Ekman P, Friesen WV. Felt, false, and miserable smiles. J Nonverbal Behav. 1982;6(4):238-252.
5. Mahn E, Sampaio CS, da Silva BP, et al. Comparing the use of static versus dynamic images to evaluate a smile. J Prosthet Dent. 2020;123(5):739-746.
6. Sarver DM, Ackerman MB. Dynamic smile visualization and quantification: part 1. Evolution of the concept and dynamic records for smile capture. Am J Orthod Dentofacial Orthop. 2003;124(1):4-12.
7. Van der Geld P, Oosterveld P, Kuijpers-Jagtman AM. Age-related changes of the dental aesthetic zone at rest and during spontaneous smiling and speech. Eur J Orthodontics. 2008;30(4):366-373.
8. Khan M, Kazmi SMR, Khan FR, Samejo I. Analysis of different characteristics of smile. BDJ Open. 2020;6(1):1-5.
9. Ackerman MB, Brensinger C, Landis JR. An evaluation of dynamic lip-tooth characteristics during speech and smile in adolescents. Angle Orthod. 2004;74(1):43-50.
10. Gunnery SD, Hall JA, Ruben MA. The deliberate Duchenne smile: individual differences in expressive control. J Nonverbal Behav. 2013;37(1):29-41.
11. Ekman P, Davidson RJ, Friesen WV. The Duchenne smile: emotional expression and brain physiology: II. J Pers Soc Psychol. 1990;58(2):342-353.
12. Ekman P. The argument and evidence about universals in facial expressions. Handbook of Social Psychophysiology. John Wiley & Sons; 1989:143-164.
13. Duchenne de Boulogne GB. The Mechanism of Human Facial Expression. Cambridge University Press; 1990. (Original publication 1862).
14. Krumhuber E, Kappas A. Moving smiles: the role of dynamic components for the perception of the genuineness of smiles. J Nonverbal Behav. 2005;29(1):3-24.
15. Ekman P, Friesen WV, Hager J. Facial Action Coding System: Manual. Consulting Psychologists Press; 1978.
16. Cohn JF, Ambadar Z, Ekman P. Observer-based measurement of facial expression with the facial action coding system. The Handbook of Emotion Elicitation and Assessment. Oxford University Press; 2007:203-221.
17. Messinger DS, Mahoor MH, Chow SM, Cohn JF. Automated measurement of facial expression in infant–mother interaction: a pilot study. Inf Dent. 2009;14(3):285-305.
18. McDuff D, Kodra E, Re K, LaFrance M. A large-scale analysis of sex differences in facial expressions. PLoS One. 2017;12(4):1-11.
19. Chen J, Ou Q, Chi Z, Fu H. Smile detection in the wild with deep convolutional neural networks. Mach Vision Appl. 2017;28(1):173-183.
20. Angela I, Lin C, Braun T, McNamara JA Jr, Gerstner GE. Esthetic evaluation of dynamic smiles with attention to facial muscle activity. Am J Orthod Dentofacial Orthop. 2013;143(6):819-827.
21. Baltrusaitis T, Zadeh A, Lim YC, Morency LP. OpenFace 2.0: facial behavior analysis toolkit. 13th IEEE International Conference on Automatic Face & Gesture Recognition; 2018.
22. Baltrusaitis T, Robinson P, Morency LP. Constrained local neural fields for robust facial landmark detection in the wild. IEEE International Conference on Computer Vision Workshops; 2013:354-361. doi: 10.1109/ICCVW.2013.54
23. Baltrušaitis T, Robinson P, Morency LP. OpenFace: an open source facial behavior analysis toolkit. IEEE Winter Conference on Applications of Computer Vision (WACV); 2016.
24. Mahoor MH, Cadavid S, Messinger DS, Cohn JF. A framework for automated measurement of the intensity of non-posed facial action units. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops; 2009.
25. Vandenbroucke JP, Von Elm E, Altman DG, et al. Strengthening the reporting of observational studies in epidemiology (STROBE): explanation and elaboration. PLoS Med. 2007;4(10):e297.
26. Cons NC, Kohout FJ, Jenny J. DAI–The Dental Aesthetic Index. College of Dentistry, University of Iowa; 1986.
27. Juckel G, Mergl R, Brüne M, et al. Is evaluation of humorous stimuli associated with frontal cortex morphology? A pilot study using facial micro-movement analysis and MRI. Cortex. 2011;47(5):569-574.
28. Borja JJ. Ratones Coloraos [Internet], Spain; 2001. https://www.youtube.com/watch?v=GDLBaHjy9Ho. Accessed October 13, 2022.
29. Saltovic E, Lajnert V, Saltovic S, Kovacevic Pavicic D, Pavlic A, Spalj S. Development and validation of a new condition-specific instrument for evaluation of smile esthetics-related quality of life. J Esthet Restor Dent. 2018;30(2):160-167.
30. Maples-Keller JL, Williamson RL, Sleep CE, Carter NT, Campbell WK, Miller JD. Using item response theory to develop a 60-item representation of the NEO PI–R using the international personality item pool: development of the IPIP–NEO–60. J Pers Assess. 2019;101(1):4-15.
31. Tripepi G, Jager K, Dekker F, Wanner C, Zoccali C. Bias in clinical research. Kidney Int. 2008;73(2):148-153.
32. Li J, Fine J. On sample size for sensitivity and specificity in prospective diagnostic accuracy studies. Stat Med. 2004;23(16):2537-2550.
33. Calvo MG, Fernández-Martín A, Recio G, Lundqvist D. Human observers and automated assessment of dynamic emotional facial expressions: KDEF-dyn database validation. Front Psychol. 2018;9:2052.
34. Ryan A, Cohn JF, Lucey S, et al. Automated facial expression recognition system. 43rd Annual International Carnahan Conference on Security Technology; 2009.
35. Jakobs E, Manstead AS, Fischer AH. Social motives and emotional feelings as determinants of facial displays: the case of smiling. Pers Soc Psychol Bull. 1999;25(4):424-435.
36. Ruan QN, Liang J, Hong JY, Yan WJ. Focusing on mouth movement to improve genuine smile recognition. Front Psychol. 2020;11:1126.
37. Ekman P. Darwin, deception, and facial expression. Ann NY Acad Sci. 2003;1000(1):205-221.
38. Girard JM, Shandar G, Liu Z, Cohn JF, Yin L, Morency LP. Reconsidering the Duchenne smile: indicator of positive emotion or artifact of smile intensity? 8th International Conference on Affective Computing and Intelligent Interaction (ACII); 2019.
39. LaFrance M, Hecht MA, Paluck EL. The contingent smile: a meta-analysis of sex differences in smiling. Psychol Bull. 2003;129(2):305-334.
40. Papa A, Bonanno GA. Smiling in the face of adversity: the interpersonal and intrapersonal functions of smiling. Emotion. 2008;8(1):1-12.
41. Girard JM, Cohn JF, Jeni LA, Sayette MA, De la Torre F. Spontaneous facial expression in unscripted social interactions can be measured automatically. Behav Res Methods. 2015;47(4):1136-1147.
42. Anderson K, McOwan PW. A real-time automated system for the recognition of human facial expressions. IEEE Trans Syst Man Cybern B Cybern. 2006;36(1):96-105.
43. Coan J. Handbook of Emotion Elicitation and Assessment. Oxford University Press; 2007.
44. Provine RR. Beyond the smile. The Science of Facial Expression. Oxford University Press; 2017:197-216.
45. Cui D, Huang GB, Liu T. ELM based smile detection using distance vector. Pattern Recognit. 2018;79:356-369.
46. Houstis O, Kiliaridis S. Gender and age differences in facial expressions. Eur J Orthodontics. 2009;31(5):459-466.
47. Park SH, Han K. Methodologic guide for evaluating clinical performance and effect of artificial intelligence technology for medical diagnosis and prediction. Radiology. 2018;286(3):800-809.
48. Inzelberg L, David-Pur M, Gur E, Hanein Y. Multi-channel electromyography-based mapping of spontaneous smiles. J Neural Eng. 2020;17(2):026025.


