The Temporal Dynamics of Facial Expressions


Eva Krumhuber, Division of Psychology and Language Sciences, University College London

Or Why Martin Scorsese Didn’t Look Happy During the 2003 Oscars Ceremony

October 2016 – Join me in exploring one of the most extraordinary aspects of human life: the ability to express emotions. As a social species, we have developed rich capacities for interacting with each other. From our earliest age, social relationships dominate our lives and contribute to making us who we are. Driven by an ingrained need for social connection (Baumeister & Leary, 1995), we use emotions to communicate with others, judge their intentions and negotiate our behavioral responses. What seems like an effortless skill involves a range of complex and relational processes (Kappas & Descóteaux, 2003). A particularly intricate aspect of human communication is its dynamic nature: rather than resembling a single snapshot, emotional expressions change over time in response to information dynamically gleaned from the environment. As we cannot press a ‘Pause’ button to capture another’s emotional expression in a vacuum, we need to learn how to interpret expressions in their changing context.

Unfortunately, this temporal quality of facial displays has frequently been overlooked in emotion research, most of which has relied on still images or photographs. Such stimuli typically consist of actors portraying high-intensity emotions based on pre-defined, stereotypical patterns of facial actions (e.g., the Pictures of Facial Affect by Ekman & Friesen, 1976; the NimStim Set of Facial Expressions by Tottenham et al., 2009; the Karolinska Directed Emotional Faces by Goeleven, de Raedt, Leyman, & Verschuere, 2008). These commonly include the six basic emotions of happiness, anger, fear, sadness, disgust and surprise. Whilst static prototypes are well recognized due to their simplified and amplified nature, they hardly ever occur in real life. In addition, they impose a standard whereby correct emotion classification consists of assigning the emotion category label intended by the researcher. This approach mainly addresses what the facial expression is supposed to portray, rather than what we really think the person felt at the moment of the expression.

The latter aspect is particularly intriguing. Is it possible to ‘recognize’ an expression as the prototypical exemplar of a particular emotion, and yet have the impression that the person is feeling something different? Take the smile, which is universally classified as a happy expression (Russell, 1994). Yet we all know that not all smiles are happy; they can be put on in the absence of any such positive emotion (e.g., politeness, appeasement) or to conceal negative feelings or motives (e.g., embarrassment, contempt, dominance; Ambadar, Cohn, & Reed, 2009; Niedenthal, Mermillod, Maringer, & Hess, 2010).

In order to distinguish between so-called ‘genuinely felt’ and ‘unfelt/false’ smiles (see Fridlund, 1994, for a different conceptualisation), Ekman and Friesen (1982) suggested several behavioral indicators. The best known of these is the Duchenne marker, a crinkling of the skin around the eyes that is said to accompany or constitute a genuine, happiness-expressing smile (Ekman, Davidson, & Friesen, 1990). Much previous work has shown that we tend to perceive such a smile as expressing felt positive emotion (e.g., Frank, Ekman, & Friesen, 1993; Krumhuber, Likowski, & Weyers, 2014; for a meta-analysis see Gunnery & Ruben, 2016).

In my own research, I wanted to find out whether this impression could change when dynamic information is added. To illustrate this idea, an example from Hollywood might prove valuable. At the Oscars ceremony in 2003, one of the favorites for the award of best director was Martin Scorsese for his epic drama ‘Gangs of New York’. The film was nominated in 10 categories, making it likely that Scorsese would walk home with several golden statues. When Roman Polanski was announced as the winner of the award for best director, Scorsese’s response was that of fierce disappointment, up until the moment when he realized that he was on camera. If you watch the video, you can see his expression change within milliseconds of the announcement to that of a happy smile (see 1m 06s).

One could debate whether his smile is a Duchenne smile (see Gunnery & Hall, 2014; Krumhuber & Manstead, 2009, for evidence questioning the reliability of the Duchenne marker as a true indicator of happiness). Martin Scorsese’s motivations here are obviously complex. Based on the speed with which his smile unfolds, however, it seems improbable that happiness would have been high on his list of priorities! It simply looks too quick to reflect genuinely felt enjoyment of his rival’s success.

In an attempt to test whether the perception of Martin’s dynamic smile relates to a more general phenomenon, I created video clips of smile expressions that systematically differed in how long they took to unfold (onset), remain at their apex, or return from peak to neutral (offset; based on Ekman & Friesen, 1982). When I showed them to people and asked them to rate how genuine they thought the smiles were, their ratings varied systematically with the dynamic trajectory of the smiles. Specifically, smiles that unfolded quickly (those with a short onset) and disappeared abruptly (those with a short offset) were judged as less authentic and less believable (Krumhuber & Kappas, 2005). These ‘dynamic fake’ smiles also led people to assign lower ratings of trustworthiness, attractiveness, and flirtatiousness to the smiling person (Krumhuber, Manstead, & Kappas, 2007).
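To make the role of these temporal parameters concrete, here is a minimal Python sketch of how a smile’s intensity trajectory can be parameterized by onset, apex and offset durations. It is purely illustrative, not the stimulus-generation code used in the studies above; the linear ramps, frame rate and all duration values are simplifying assumptions:

```python
import numpy as np

def smile_trajectory(onset_s, apex_s, offset_s, fps=25, peak=1.0):
    """Per-frame smile intensities: 0 = neutral, `peak` = apex.

    The expression ramps up over `onset_s` seconds, holds at `peak`
    for `apex_s` seconds, then decays to neutral over `offset_s`
    seconds. Linear ramps are a simplification; real facial motion
    is smoother and asynchronous across muscle groups.
    """
    onset = np.linspace(0.0, peak, int(onset_s * fps), endpoint=False)
    apex = np.full(int(apex_s * fps), peak)
    offset = np.linspace(peak, 0.0, int(offset_s * fps))
    return np.concatenate([onset, apex, offset])

# 'Authentic-looking' smile: slow onset and offset (durations are illustrative).
authentic = smile_trajectory(onset_s=0.5, apex_s=0.8, offset_s=0.6)

# 'Fake-looking' smile: abrupt onset and offset.
fake = smile_trajectory(onset_s=0.1, apex_s=0.8, offset_s=0.1)
```

Driving the expression parameters of an animated face with trajectories like these would yield clips that differ only in their temporal dynamics while holding the peak expression constant.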

In subsequent work, I found that dynamic information affects people’s decisions and behavioral intentions. For example, in a simulated job interview scenario, interviewees displaying smiles with long onset and offset durations were rated more highly on job-related traits such as competence and motivation, and were judged more likely to be short-listed and selected for the job (Krumhuber, Manstead, Cosker, Marshall, & Rosin, 2009). Furthermore, people were more likely to trust and cooperate with others in economic games when those others displayed such dynamic authentic smiles rather than neutral expressions or dynamic fake smiles (Krumhuber, Manstead, Cosker, Marshall, Rosin, & Kappas, 2007).

From these findings two conclusions can be drawn. First, people go beyond what is stereotypically recognized or thought of as an emotion. This suggests that perceiving the visual properties of an expression (e.g., smiling mouth, Duchenne marker) is distinct from perceiving affect-specific information about what the sender really feels. Hence, emotion recognition (i.e., accurate classification) does not equal emotion interpretation; a crucial distinction that is missing from Basic Emotion Theory, which assumes a direct link between expressive physical features and affective states (Calvo & Nummenmaa, 2016). Second, observers use the dynamic qualities of facial displays, as these unfold, to discern their affective meaning (i.e., the internal states of the expresser).

In line with work from other research labs over the past few years, there is much evidence that facial motion provides information over and above that contained in static emotional displays (for a review of the effects of dynamic aspects of facial expressions, see Krumhuber, Kappas, & Manstead, 2013; Krumhuber & Skora, in press). In addition to the speed with which parts of the face move, the temporal sequence of facial actions is a critical factor in the expression of emotion. This aspect is particularly acknowledged by componential theories of emotion (Smith & Scott, 1997), which regard individual elements of expressions as dynamic properties that emerge over time. Studies that have applied fine-grained behavioral analysis to the time course of expressions (using the Facial Action Coding System; Ekman, Friesen, & Hager, 2002) suggest that facial actions indeed unfold sequentially and converge toward the apex in an asynchronous manner (Fiorentini, Schmidt, & Viviani, 2012; Krumhuber & Scherer, 2011; With & Kaiser, 2011). Furthermore, such sequential temporal patterns shape the emotion judgements made by observers (Jack, Garrod, & Schyns, 2014; Krumhuber & Scherer, 2016). There is more work to be done in this area, much of which will be achieved using machine recognition to extract the dynamic structure of facial expressions (Pantic, 2009).
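As a toy illustration of what such fine-grained temporal analysis might involve computationally, the sketch below segments a single action unit’s per-frame intensity series, such as one produced by an automatic FACS coder, into onset, apex and offset phases. This is a hypothetical example, not the procedure used in the cited studies; the threshold rule and the synthetic data are assumptions:

```python
import numpy as np

def segment_phases(au_intensity, apex_fraction=0.9):
    """Split one action unit's intensity time series into
    onset / apex / offset frame indices.

    Frames at or above `apex_fraction` of the maximum intensity are
    treated as the apex plateau; everything before counts as onset,
    everything after as offset. A real analysis would smooth the
    signal and handle multiple peaks.
    """
    au = np.asarray(au_intensity, dtype=float)
    threshold = apex_fraction * au.max()
    above = np.flatnonzero(au >= threshold)
    apex_start, apex_end = above[0], above[-1]
    return {
        "onset": np.arange(0, apex_start),
        "apex": np.arange(apex_start, apex_end + 1),
        "offset": np.arange(apex_end + 1, len(au)),
    }

# Synthetic example: slow rise, plateau, fast decay.
series = np.concatenate([np.linspace(0, 1, 20), np.ones(10), np.linspace(1, 0, 5)])
phases = segment_phases(series)
print({k: (v[0], v[-1]) for k, v in phases.items() if len(v)})
```

Comparing phase durations across action units in this way is one route to quantifying the asynchronous, sequential unfolding described above.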

For the foreseeable future, facial motion promises to remain a topic of scientific interest. With computing entering the social domain, it is possible to communicate not only with other human beings, but increasingly with artificial entities. These can be computer-animated characters (e.g., Lara Croft, Shodan, Master Chief), robots (e.g., Ishiguro’s Geminoid, Hanson’s Einstein, Breazeal’s Kismet) or virtual agents (e.g., Pelachaud’s Greta, ICT’s SimCoach). Whether embodied or digital, their appearance and life-like demeanor are becoming more and more realistic (Küster, Krumhuber, & Kappas, 2014). If such entities are to gain users’ acceptance, the social cues they produce must appropriately guide our responses. Facial expressions are an essential ingredient in revealing a character’s personality, thoughts, and feelings. However, only if they appear convincing and authentic will we feel comfortable interacting with them.

From previous research, we know that people’s affinity for artificial entities does not necessarily increase linearly with their degree of human-likeness (MacDorman, Green, Ho, & Koch, 2009). That is, a more humanlike appearance does not always lead to more favorable evaluations. In fact, such characters can be subject to the so-called ‘Uncanny Valley’ effect (Mori, 2012) when their appearance falls just short of emulating that of actual human beings. As a result, imperfections in a near-human appearance can become ‘uncanny’ and repulsive. Classic examples can be found in CGI-heavy animated films such as The Polar Express or Final Fantasy: The Spirits Within, which many viewers find disturbing and off-putting due to their ‘creepy’ characters (e.g., Geller, 2008). This mismatch between appearance and behavior is particularly evident when movement is involved, as motion can violate perceptual expectations (Saygin, Chaminade, Ishiguro, Driver, & Frith, 2012). To avoid such pitfalls, it will be important to establish how emotions should be expressed if we are to create emotionally appealing and usable systems. The temporal features of facial expressions can provide crucial insight into this process by allowing us to infer not only the presence of an emotional signal but also its meaning.

Are you interested in dynamic facial expression research?

Would you like to use dynamic expressions in your own research? Not sure where to find the appropriate stimuli? In a forthcoming article in Emotion Review, we (Krumhuber, Skora, Küster, & Fou) provide a systematic overview of 22 publicly available databases of dynamic facial expressions and compare their conceptual and practical features, offering guidance on which database to use.

If you want to create your own dynamic stimuli, the easy-to-use facial animation software FACSGen 2.0 (Krumhuber, Tamarit, Roesch, & Scherer, 2012) allows the production of static and dynamic facial expressions based on the Facial Action Coding System (FACS; Ekman, Friesen, & Hager, 2002). Unfortunately, due to ongoing licensing issues, the software is currently not publicly available. For updates on the software’s availability, please contact David Sander (david.sander@unige.ch) or Didier Grandjean (Didier.grandjean@unige.ch).

For readers with more advanced technical skills (e.g., computer scientists or engineers), the D3DFACS dataset (Cosker, Krumhuber, & Hilton, 2011) might be of interest: it is composed of 3D scans of real human faces, contains over 500 dynamic facial action sequences, and is fully FACS-coded. The dataset can be used to build realistic facial animation models and is currently also used in the commercial sector by visual effects companies. For more information and to obtain access, please visit http://www.cs.bath.ac.uk/~dpc/D3DFACS/.

References

Baumeister, R. F., & Leary, M. R. (1995). The need to belong: Desire for interpersonal attachments as a fundamental human motivation. Psychological Bulletin, 117, 497-529.

Calvo, M. G., & Nummenmaa, L. (2016). Perceptual and affective mechanisms in facial expression recognition: An integrative review. Cognition and Emotion, 30, 1081-1106.

Cosker, D., Krumhuber, E., & Hilton, A. (2011). A FACS valid 3D dynamic action unit database with applications to 3D dynamic morphable facial modeling. In D. Metaxas, L. Quan, A. Sanfeliu, & L. Van Gool (Eds.), Proceedings of the 13th IEEE International Conference on Computer Vision (ICCV) (pp. 2296-2303). Barcelona, Spain: IEEE.

Ekman, P., Davidson, R., & Friesen, W. V. (1990). The Duchenne smile: Emotional expression and brain physiology II. Journal of Personality and Social Psychology, 58, 342-353.

Ekman, P., & Friesen, W. V. (1976). Pictures of facial affect. Palo Alto, CA: Consulting Psychologists Press.

Ekman, P., & Friesen, W.V. (1982). Felt, false and miserable smiles. Journal of Nonverbal Behavior, 6, 238–258.

Ekman, P., Friesen, W. V., & Hager, J. C. (2002). The Facial Action Coding System (2nd ed.). Salt Lake City, UT: Research Nexus eBook.

Fiorentini, C., Schmidt, S., & Viviani, P. (2012). The identification of unfolding facial expressions. Perception, 41, 532–555.

Frank, M. G., Ekman, P., & Friesen, W. V. (1993). Behavioral markers and recognizability of the smile of enjoyment. Journal of Personality and Social Psychology, 64, 83–93.

Fridlund, A. J. (1994). Human facial expression: An evolutionary view. New York: Academic Press.

Geller, T. (2008). Overcoming the uncanny valley. IEEE Computer Graphics and Applications, 28, 11-17.

Goeleven, E., de Raedt, R., Leyman, L., & Verschuere, B. (2008). The Karolinska Directed Emotional Faces: A validation study. Cognition and Emotion, 22, 1094–1118.

Gunnery, S. D., & Hall, J. A. (2014). The Duchenne smile and persuasion. Journal of Nonverbal Behavior, 38, 181-194.

Gunnery, S. D., & Ruben, M. A. (2016). Perceptions of Duchenne and non-Duchenne smiles: A meta-analysis. Cognition and Emotion, 30, 501-515.

Jack, R. E., Garrod, O. G. B., & Schyns, P. G. (2014). Dynamic facial expressions of emotions transmit an evolving hierarchy of signals over time. Current Biology, 24, 187–192.

Kappas, A., & Descóteaux, J. (2003). Of butterflies and roaring thunder: Nonverbal communication in interaction and regulation of emotion. In P. Philippot, E. J. Coats, & R. S. Feldman (Eds.), Nonverbal behavior in clinical settings. New York: Oxford University Press.

Krumhuber, E., & Kappas, A. (2005). Moving smiles: The role of dynamic components for the perception of the genuineness of smiles. Journal of Nonverbal Behavior, 29, 3-24.

Krumhuber, E. G., Likowski, K. U., & Weyers, P. (2014). Facial mimicry of spontaneous and deliberate Duchenne and Non-Duchenne smiles. Journal of Nonverbal Behavior, 38, 1-11.

Krumhuber, E., Manstead, A. S. R., & Kappas, A. (2007). Temporal aspects of facial displays in person and expression perception: The effects of smile dynamics, head-tilt and gender. Journal of Nonverbal Behavior, 31, 39-56.

Krumhuber, E., Manstead, A. S. R., Cosker, D., Marshall, D., & Rosin, P. L. (2009). Effects of dynamic attributes of smiles in human and synthetic faces: A simulated job interview setting. Journal of Nonverbal Behavior, 33, 1-15.

Krumhuber, E., Manstead, A. S. R., Cosker, D., Marshall, D., Rosin, P. L., & Kappas, A. (2007). Facial dynamics as indicators of trustworthiness and cooperative behavior. Emotion, 7, 730-735.

Krumhuber, E., & Manstead, A. S. R. (2009). Can Duchenne smiles be feigned? New evidence on felt and false smiles. Emotion, 9, 807-820.

Krumhuber, E. G., Kappas, A., & Manstead, A. S. R. (2013). Effects of dynamic aspects of facial expressions: A review. Emotion Review, 5, 41-46.

Krumhuber, E., Tamarit, L., Roesch, E. B., & Scherer, K. R. (2012). FACSGen 2.0 animation software: Generating 3D FACS-valid facial expressions for emotion research. Emotion, 12, 351-363.

Krumhuber, E., & Scherer, K. R. (2011). Affect bursts: Dynamic patterns of facial expression. Emotion, 11, 825-841.

Krumhuber, E., & Scherer, K. R. (2016). The look of fear from the eyes varies with the dynamic sequence of facial actions. Swiss Journal of Psychology, 75, 5-14.

Krumhuber, E., Skora, L., Küster, D., & Fou, L. (in press). A review of dynamic datasets for facial expression research. Emotion Review.

Krumhuber, E., & Skora, L. (in press). Perceptual study on facial expressions. In B. Müller & S. Wolf (Eds.), Handbook of Human Motion. Heidelberg, Germany: Springer-Verlag.

Küster, D., Krumhuber, E., & Kappas, A. (2014). Nonverbal behavior online: A focus on interactions with and via artificial agents and avatars. In A. Kostic & D. Chadee (Eds.), Social Psychology of Nonverbal Communications (pp. 272-302). New York, NY: Palgrave Macmillan.

MacDorman, K. F., Green, R. D., Ho, C.-C., & Koch, C. T. (2009). Too real for comfort? Uncanny responses to computer generated faces. Computers in Human Behavior, 25, 695-710.

Mori, M. (2012). The Uncanny Valley. (K. F. MacDorman & N. Kageki, Trans.). IEEE Robotics and Automation Magazine, 19, 98-100.

Niedenthal, P. M., Mermillod, M., Maringer, M., & Hess, U. (2010). The simulation of the smiles (SIMS) model: Embodied simulation and the meaning of facial expression. Behavioral and Brain Sciences, 33, 417-433.

Pantic, M. (2009). Machine analysis of facial behaviour: Naturalistic and dynamic behaviour. Philosophical Transactions of the Royal Society B, 364, 3505-3513.

Russell, J. A. (1994). Is there universal recognition of emotion from facial expression? A review of the cross-cultural studies. Psychological Bulletin, 115, 102-141.

Saygin, A. P., Chaminade, T., Ishiguro, H., Driver, J., & Frith, C. (2012). The thing that should not be: predictive coding and the uncanny valley in perceiving human and humanoid actions. Social Cognitive and Affective Neuroscience, 7, 413-422.

Smith, C. A., & Scott, H. S. (1997). A componential approach to the meaning of facial expressions. In J. A. Russell & J. M. Fernandez-Dols (Eds.), The psychology of facial expression (pp. 229–254). Cambridge, UK: Cambridge University Press.

Tottenham, N., Tanaka, J. W., Leon, A. C., McCarry, T., Nurse, M., Hare, T. A., Marcus, D. J., Westerlund, A., Casey, B. J., & Nelson, C. (2009). The NimStim set of facial expressions: Judgments from untrained research participants. Psychiatry Research, 168, 242-249.

With, S., & Kaiser, S. (2011). Sequential patterning of facial actions in the production and perception of emotional expressions. Swiss Journal of Psychology, 70, 241–252.

 
