Am I in trouble? Interpreting facial expressions

Throughout this semester, I have explored the science behind issues all (graduate) students face in their daily emotional lives. I talked about what studying neuroscience actually entails, how caffeine consumption affects affect, the relationship between cognitive load and emotions, and the neural underpinnings of anxiety during public speaking (which was an actual in-class presentation). We are missing only one essential aspect of this journey: how do you tell if you are in trouble with your advisor?

You can tell how other people are feeling by interpreting their facial expressions. Ekman and Friesen (1978) developed the Facial Action Coding System (FACS), a manual for interpreting every anatomically possible facial movement and expression. Ekman then categorized certain emotions based on the movements involved in expressing them, which, when simplified, amounts to something like this:
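To make the idea concrete, here is a minimal sketch of how a FACS-style categorization could work in code. The action-unit combinations below follow commonly cited simplified summaries of Ekman's system (e.g., happiness as cheek raiser plus lip corner puller); they are illustrative assumptions, not the full FACS manual, and the matching function is my own toy heuristic:

```python
# Simplified, illustrative mapping of FACS action units (AUs) to basic
# emotion categories. These combinations follow commonly cited simplified
# summaries of the system and are NOT the complete FACS specification.
EMOTION_AUS = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "anger":     {4, 5, 7, 23},  # brow lowerer + lid raiser/tightener + lip tightener
}

def classify_expression(observed_aus):
    """Return the emotion whose AU set best matches the observed AUs
    (by Jaccard similarity), or None if there is no overlap at all."""
    best, best_score = None, 0.0
    for emotion, aus in EMOTION_AUS.items():
        score = len(aus & observed_aus) / len(aus | observed_aus)
        if score > best_score:
            best, best_score = emotion, score
    return best

print(classify_expression({6, 12}))  # -> happiness
```

Real FACS coding is far more nuanced (intensities, head movements, combinations of dozens of AUs), but the table-plus-matching structure above is the basic shape of the idea.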

[basic assumptions of Ekman’s facial action coding system illustrated using the face of Tim Roth, the actor portraying Dr. Cal Lightman in Lie to Me – a TV drama employing FACS to solve crimes. But since we are all scientists here and we value accuracy, remember to check out Dr. Ekman’s own critique of the show.]

The underlying assumption of categorizing emotions based on the facial expressions accompanying the feeling is that emotions come in discrete, universal categories, each with its own characteristic expression. The study of emotional expression goes back to Darwin and his 1872 book The Expression of the Emotions in Man and Animals.

Blair (2003) reviews the literature on the neural mechanisms necessary for the interpretation of emotional expressions. One crucial assumption is that emotional expressions serve a communicatory role: Blair argues that expressions of sadness, fearfulness, and happiness serve as reinforcers modulating future behavior, while displays of anger or embarrassment do not act as stimuli for learning but instead are important modulators of current behavioral responses. An alternative hypothesis on the role of emotional expressions was proposed by Ekman (1997): while it is true that emotional displays transmit information, that is not their primary purpose; they are in fact an automatic consequence of the experience of emotion.

Either way, whether it’s their primary or secondary purpose – emotional expressions do convey information that, when interpreted successfully, is a powerful basis for reaction.

Recognizing emotional facial expressions involves a large number of different structures in the brain, among them:

the occipitotemporal cortices – this is where the perception of a face happens,

the amygdala – recognizes that the perceived face is indeed emotional, whether or not you intend to interpret the other person’s emotional expression,

the orbitofrontal cortex – activated when you are actively seeking information about the emotional facial expression, and

the somatosensory-related cortices and the basal ganglia – which is where it gets a bit trickier, because we are not exactly sure what happens here. One interpretation of data from lesion studies of patients with damage to these areas is that viewing a facial emotional expression triggers an emotional response of our own that mirrors the emotion we just recognized in our interlocutor. Feeling what the other person feels aids our recognition process (Adolphs, 2002; Kesler/West et al., 2001).

Now that we have the spatial aspect of emotional face processing covered, let’s look at the temporal aspect: Eimer and Holmes (2002) investigated it using ERP. They asked their volunteers to view pictures of different faces (either fearful or neutral) and houses (serving as a control stimulus). Turns out, recognizing a face as emotional begins as early as 115 ms after presentation with activation in the fronto-central regions, and the entire process doesn’t really last more than about 300 ms (the posterior regions activate approximately 255-270 ms after stimulus presentation). So really, it just takes a quick glance at your advisor’s face to realize whether you’re in trouble.

But are some emotional expressions recognized faster than others? Fox and colleagues (2000) studied that question in the context of angry faces (something very relevant when you forget about a deadline). They presented their participants with a group of faces and asked them to judge whether all the faces had the same expression, or whether one face had a different expression than the others; they manipulated the duration of display, the number of faces, and whether the faces were presented upside down. Turns out, the participants were faster in detecting that a display had all happy faces (vs. all angry faces – the angry faces drew more attention, so it took more time to decide that they were all the same); but when the displays contained an odd face, the participants were faster to detect an angry face among happy faces (vs. a happy face among angry faces). We are faster to process angry faces!

But in order to process emotional faces at all, Pessoa, McKenna, Gutierrez, and Ungerleider (2002) argue that we need to actively attend to them. Here’s the issue: data suggest that emotional stimuli activate the appropriate brain regions automatically (in some tasks an aversive stimulus is presented so quickly that it doesn’t really make it all the way into conscious processing, yet the brain responds to it by activating the appropriate networks – so I don’t have to pay attention to it for my brain to respond), but we know that interpreting an emotional face involves activation of the occipitotemporal cortices responsible for visual processing, which is guided by attention. So which one is it, attention or no attention? Pessoa and colleagues set out to investigate this using fMRI; they presented their participants with emotional faces and asked them to either focus on the faces, or focus on solving a different task (deciding whether two bars had a similar orientation). They found that the recognition process (as indicated by activation of the appropriate brain networks I outlined above) was modulated by attention – if you were too busy to pay attention to the face, you did not recognize its emotion. Simple as that.

So my fellow students – one direct look at your advisor’s face will tell you whether you are in trouble, and if you are in trouble – you will realize that very, very fast.

Happy holidays!




Adolphs, R. (2002). Neural systems for recognizing emotion. Current Opinion in Neurobiology, 12, 169-177.

Blair, R.J.R. (2003). Facial expressions, their communicatory functions and neuro-cognitive substrates. Philosophical Transactions of the Royal Society of London, 358, 561-572.

Eimer, M., & Holmes, A. (2002). An ERP study on the time course of emotional face processing. NeuroReport, 13(4), 427-431.

Ekman, P., & Friesen, W. (1978). Facial Action Coding System: A Technique for the Measurement of Facial Movement. Palo Alto, CA: Consulting Psychologists Press.

Ekman, P. (1997). Should we call it expression or communication? Innovations Social Sciences Research, 10, 333-344.

Fox, E., Lester, V., Russo, R., Bowles, R.J., Pichler, A., & Dutton, K. (2000). Facial expressions of emotion: Are angry faces detected more efficiently? Cognition and Emotion, 14(1), 61-92.

Kesler/West, M.L., Andersen, A.H., Smith, C.D., Avison, M.J., Davis, C.E., Kryscio, R.J., & Blonder, L.X. (2001). Neural substrates of facial emotion processing using fMRI. Cognitive Brain Research, 11, 213-226.

Pessoa, L., McKenna, M., Gutierrez, E., & Ungerleider, L.G. (2002). Neural processing of emotional faces requires attention. Proceedings of the National Academy of Sciences, 99(17), 11458-11463.
