ISRE Matters


Professor Arvid Kappas, May 2017

 

Emotions, One, Two, Many 

Arvid Kappas, Department of Psychology, Jacobs University Bremen

a.kappas@jacobs-university.de

July 2017 – As it turns out, empathy is a topic close to my heart and one that has played an important part in my research agenda over the last three decades. So, in my last ISRE Matters column, I would like to mention some of my own thoughts and work and explain how I came to the view that empathy is a construct central to emotion, because the notion of emotion in the individual without at least an implicit social context is basically flawed. I’ll then say something about how concepts that relate to emotion processes between two people can be extended to the much more complex situation of how emotions spread on the Internet to form collective emotions; and finally I’ll describe a project where we tried to teach empathy to robots so that they can be better tutors for children.

One

Figure 1. The classic paradigm in experimental psychology to assess affective responses isolates the participant from social influences at a physical level. However, research (e.g., Fridlund, 1991) suggests that implicit social effects remain (photo: Jacobs University).

My initial views about how to study emotions were very much influenced by Klaus Scherer, who was my mentor and supervisor in Giessen, Germany, when I studied Psychology; and by Paul Ekman with whom Scherer’s group had frequently been in contact at the time. Based on the assumption that social context tends to influence the expression of emotions, it appeared best to isolate subjects by placing them in a chamber and presenting stimuli or tasks while they were alone, so that one could see the emotional reaction, apparently untainted by social noise, display rules, or cultural context. In other words – emotions would be studied in individuals cut off from the social context.

Having said that, this was not the approach taken in the experimental paradigm that I used (together with Ursula Hess) as the basis of my Master’s research (1984-86). Rather, Ursula and I used a mock videophone interaction in which the stimulus person appeared to talk to the subjects one-on-one. We were interested in how changes in the voice, intonation, and facial activity would affect people’s perception of emotions and attitudes (see Hess, Kappas, & Scherer, 1988, for details on that research). We felt at the time that emotions were more likely to happen in social situations and that the communication of these emotions would affect the ongoing interaction, typically depicted by Scherer and his colleagues as a Brunswikian Lens Model.

Two

When I moved to Dartmouth College in 1986 to work with John Lanzetta, I was very much affected by the way in which John, who was an engineer before he was a psychologist, would think of interaction. Everything was about closed loops and feedback processes. On this approach, empathy was a critical element for the regulation of dyadic interaction. Rather than imagining interaction to resemble a ping-pong match, where first Person A communicates with Person B and then Person B with Person A, it appeared better to think of interaction as a closed system with several concurrent feedback loops. The idea that social context might affect expressions, and that expressions might impact subjective experience and physiological responses of emotions via facial feedback, led me to propose the super lens model in 1991.

Because I assumed that we tend to automatically empathize with people, I developed a view critical of the “classic experimental paradigm” of isolating subjects and confronting them with non-social stimuli. In this context, I consider that the subject, cut off from social interaction in a rather artificial manner, will quickly become a free monadic radical (Kappas, 2013) that is ready to connect to any social context (typically implicit) that is available – for example, the experimenters, other waiting subjects, etc. In that sense, and following Fridlund’s suggestion, I assumed that we always display expressive behavior to the people in our head (Fridlund, 1991; see also Hess, Banse, & Kappas, 1995).

These days I believe that empathy is not always automatic, but moderated by the social relationship we have with a person. This could be understood as an ingroup/outgroup phenomenon – that is, as depending on whether someone is inside our moral circle or not. A different facet is how much humanity we grant our explicit or implicit interactant (see Krumhuber et al., 2015). A corollary of the belief that emotion and empathy are always associated with closed feedback loops is my tenet that it is difficult, and perhaps not useful, to distinguish between emotion and its regulation, as there are always regulatory processes, both social and within the individual, at work (Kappas, 2011).

Figure 2. Hyper lens model. Nested communication systems between seven people featuring feedback loops. From Kappas (2013).

Many

The leap from psychological processes in the individual to dyadic processes is already a great challenge for the experimental emotion researcher. But increasingly we interact with large numbers of people over the Internet. Not all of these processes happen in real time, but they clearly affect us. Research in this area is truly challenging. The CYBEREMOTIONS project studied how emotions are elicited, communicated, and spread over social networks (Garcia et al., 2016). In this context we can think about networks of empathic processes – a fascinating new area of research, but one that is particularly demanding because of the new methods required to measure and analyze changes in affective states. At a time when political decisions are a function of how many people react to mediated material, such as tweets, images on news sites or in social media, or comments on articles and blogs, it becomes important to understand how and why we seem to empathize with some people but not others, in some circumstances but not others, and with different time courses. We know way too little about processes at this scale.
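
As a purely illustrative aside (this is not the CYBEREMOTIONS model, whose methods are described in Garcia et al., 2016), a minimal contagion-style sketch can make the idea of emotions spreading over a network concrete; all names and parameters below are invented for the example.

```python
# Purely illustrative toy, not the CYBEREMOTIONS model: a few agents hold a
# valence score in [-1, 1] and are pulled toward the mean valence of the
# people they read, with a slow drift back to neutral.

valence = {"ann": 0.6, "ben": -0.2, "cei": 0.1, "dev": -0.8}
reads = {
    "ann": ["ben", "cei"],
    "ben": ["ann", "dev"],
    "cei": ["ann"],
    "dev": ["ben", "cei"],
}

COUPLING = 0.3  # strength of the empathic pull toward what one reads (invented)
DECAY = 0.05    # relaxation toward neutral between updates (invented)

def step(valence, reads):
    """One synchronous update of all agents."""
    new = {}
    for person, sources in reads.items():
        social = sum(valence[s] for s in sources) / len(sources)
        v = valence[person]
        v += COUPLING * (social - v)          # move toward the read content
        v -= DECAY * v                        # drift back toward neutral
        new[person] = max(-1.0, min(1.0, v))  # keep scores in [-1, 1]
    return new

for t in range(10):
    valence = step(valence, reads)
    print(t, {name: round(v, 2) for name, v in valence.items()})
```

Even in such a toy, the collective outcome depends on who reads whom and on how strong the empathic coupling is, which is exactly the kind of parameter that is hard to estimate from real online data.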

Figure 3. Arvid explains empathy depicted on the display of a pavilion on Lisbon’s Praça do Comércio on the occasion of the ICT 2015.

Enter the Robot

In the meanwhile, I have started to teach empathy to robots. Together with a group of excellent researchers from different disciplines and countries, I worked on the EMOTE project, which developed robotic tutor systems designed to respond to a child’s affective state and, for example, adjust their teaching strategies accordingly. To me, this was a brilliant exercise, one that started from the question of how to define empathy in a way that would allow one to build empathic systems, even though there is no commonly agreed-upon definition of empathy; and then moved on to investigating how to give the robot advantages over what a human might have (e.g., access to physiological activation data) to compensate for the fact that humans are much better than robots at really understanding a situation and its implications, as well as visible affective responses. This research has attracted a lot of attention: we were, for example, invited to CES in Las Vegas in 2016 to present our work as part of a session on transforming education, and we were featured by the European Commission at its conference on the ICT research it supports.
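
To make the idea of an affect-adaptive tutoring loop concrete, here is a hypothetical sketch; the signals, thresholds, and function names are invented for illustration and do not represent EMOTE’s actual architecture or software.

```python
# Hypothetical sense-appraise-act tutoring loop; function names, signals,
# and thresholds are invented and are not EMOTE's actual design.

def estimate_frustration(skin_conductance, recent_errors):
    """Crude estimate of frustration from signals a robot tutor could access."""
    arousal = min(1.0, skin_conductance / 10.0)   # normalise physiological activation
    return min(1.0, 0.5 * arousal + 0.1 * recent_errors)

def choose_teaching_move(frustration):
    """Map the estimated affective state to a teaching strategy."""
    if frustration > 0.7:
        return "encourage and switch to an easier step"
    if frustration > 0.4:
        return "give a hint"
    return "continue with the current task"

# One pass through the loop for three hypothetical moments in a session.
for skin_conductance, recent_errors in [(2.0, 0), (6.0, 2), (9.5, 4)]:
    frustration = estimate_frustration(skin_conductance, recent_errors)
    print(f"frustration={frustration:.2f} -> {choose_teaching_move(frustration)}")
```

The point of the sketch is simply the closed loop of sensing, appraising, and acting: the robot’s advantage lies in the extra channels it can sense, not in a deeper understanding of the situation.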

Figure 4. Much of EMOTE’s research used NAO robots (Softbank Robotics) as a platform to study artificial empathic tutors.

EMOTE is over, but I have recently received a grant to continue this research in the context of a European Training Network that will fund 15 PhD students distributed over labs from Portugal to Sweden, from 2017 to 2021. If you know a psychology student who will have a Master’s degree by the end of 2017, would like to work in this context, and perhaps has some skills relating to robotics, please direct them to me.

Check out this video on our research

Closing Remarks

This is very likely my last presidential column in the Emotion Researcher, as my tenure as ISRE’s president comes to a close after four years (2 x 2). It has not always been smooth sailing, but it has always been exciting. I am deeply grateful for having had the opportunity to serve the society. ISRE was founded while I was a student in Klaus Scherer’s lab (see above), and I was immediately impressed by the concept of a society that was so international and, at the same time, so interdisciplinary. At the time, membership and attendance at ISRE conferences were restricted to senior researchers, and the founding group still reads like a Who’s Who. We have opened the society to researchers who have not yet received their PhD and we have made it easier to join. We have started to use social media and creative forms of communicating.

When I joined ISRE, the newsletter was a small black-and-white printed thing of perhaps 8 pages; now its heir – Emotion Researcher – is a singing and dancing Internet offering, thanks to Andrea Scarantino’s overhaul of the concept. I am thankful for your enormous work, Andrea! This will shape our communication to the rest of the world for years to come. Thank you also for the detailed feedback on my columns over the years. They tend to jump like a bunny from topic to topic and have a very “spoken” style, so my writing has always benefitted from your comments. I am very excited that we have two new editors of the Emotion Researcher: Carolyn Price and Eric Walle. This is the first issue they have edited. Carolyn and Eric, I wish you all the best. Keep it relevant, keep it up-to-date.

At the St Louis conference, July 26-29, I will thank more people. I hope to see you there. If you have not decided to come, change your mind: we have an excellent conference lined up and we want you there. The next US conference is likely four years away, so now is a good time. It’s not too late to book those seats and join us at the Chase Park Plaza Hotel. More info elsewhere in this issue of Emotion Researcher.

References

Fridlund, A. J. (1991). The sociality of solitary smiles: Effects of an implicit audience. Journal of Personality and Social Psychology, 60, 229–240.

Garcia, D., Kappas, A., Kuester, D., & Schweitzer, F. (2016). The dynamics of emotions in online interaction. Royal Society Open Science, 3, 160059.

Hess, U., Banse, R., & Kappas, A. (1995). The intensity of facial expression is determined by underlying affective state and social situation. Journal of Personality and Social Psychology, 69, 280–288.

Hess, U., Kappas, A., & Scherer, K. R. (1988). Multichannel communication of emotion: Synthetic signal production. In K. R. Scherer (Ed.), Facets of emotion: Recent research (pp. 161–182). Hillsdale, NJ: Erlbaum.

Kappas, A. (2013). Social regulation of emotion: Messy layers. Frontiers in Psychology, 4(51), 1–11.

Kappas, A. (2011). Emotion and regulation are one! Emotion Review, 3, 17–25.

Kappas, A. (1991). The illusion of the neutral observer: On the communication of emotion. Cahiers de Linguistique Française, 12, 153–168.

Krumhuber, E. G., Swiderska, A., Tsankova, E., Kamble, S. V., & Kappas, A. (2015). Real or artificial? Intergroup biases in mind perception in a cross-cultural perspective. PLoS ONE, 10, e0137840.
