Why happy vacuum cleaners are good for us
These are exciting times for emotion researchers. Of course, to a certain degree all times are exciting for emotion researchers – after all, we have the privilege of working on something that permeates the essence of the human experience like few other fields of study. An additional bonus is that skeptics about the value of emotion research have become increasingly rare, in contrast to what we faced for many years in the past, be it in the context of radical behaviorism or during the cognitive revolution. Even fields like economics, traditionally grounded in rational choice theory, have lately taken a serious interest in affective processes. Affective neuroscience is blooming as a field of study, so times are good for us!
The number of journals dealing explicitly with affective phenomena is considerable, and even the number of societies is growing – see the founding of the Society for Affective Science (SAS), which recently held its first successful conference. ISRE is thriving, with a growing membership, exciting conferences, a stellar journal, and all kinds of activities – the online newsletter you are presently reading being one of them. Emotion research has arrived.
And yet, what excites me personally the most is that my vacuum cleaner is happy. Let me explain. In 2005, at the ISRE meeting in Bari, I had organized a symposium on artificial emotions. In my own presentation, I started reading a self-penned short fictional diary about my experiences with a robotic vacuum cleaner that involved motivated and affective behavior. After a short while, the diary reported, things started to go in unexpected directions, and I decided to switch off the emotion program, disappointed by all the side effects of the little machine having “emotions”. However, after a short while I missed “my happy vacuum cleaner” as a companion, as I had bonded quickly with it, and so I decided that the companionship aspect mattered more than the cleaning aspect. The short diary can be found here.
The point I wanted to make back in 2005 was that interactions between artificial systems and humans potentially offer a great window into how interactions work between humans. Here I want to make a slightly different point. Over the last few years, I have been involved in several projects, formally and informally, that are often referred to as belonging to the field of affective computing. This term, originally coined by Rosalind Picard at MIT, refers to an interdisciplinary endeavor that involves making machines emotion-savvy. This may involve simulating emotional behavior, or being responsive to emotional displays and other emotional behaviors of external entities, including humans. The interactions with colleagues from fields as different as physics, mathematics, computer science, engineering, neuroscience, and education have sensitized me to several issues, such as:
- People outside emotion research are still perplexed at the confusion regarding emotion definitions. “How difficult can it be?”, they ask. There are reasons, of course, why definitional disagreements persist, and we must learn to better communicate that this is not due to people not being able to make up their minds, but to genuine challenges we face in shedding light on highly complex phenomena. Clarifying what underlies our definitional disagreements is likely to promote useful research in the future.
- Many of the debates within emotion science are hard to translate into “real world” terms and to communicate to the public at large or to scientists and scholars working in other areas. To the outside world, some of our central debates appear to resemble discussions about how many angels can dance on the head of a pin.
- Implementation challenges have a way of raising seemingly simple questions, such as “How long does an emotion last?”, that turn out to be surprisingly hard to answer. An interesting lesson here is that it may be useful for theoreticians to actually talk with implementers to find out what questions they need us to answer. This might turn out to be a remarkably fruitful strategy for circumventing some impasses in purely theoretical discourse.
- Our theories and assumptions about emotions differ with regard to how easily they can be translated into a physical implementation.
The idea of modeling emotional systems has always intrigued me. I distinctly remember getting hooked, 30 years ago, on the computer magazine BYTE and its musings about artificial intelligence (AI). At the time, I was a student assistant in the laboratory of Klaus Scherer in Giessen, Germany. Around 1983, Klaus started to discuss and publish his Component Process Model of emotions. As I approached my Master’s degree (Diplom) and started planning for my PhD, I wanted to build an AI model that combined Scherer’s theory with a model of physiology and behavior.
I realized that, given that different physiological systems have different time courses, it would be very difficult to model how the dynamically unfolding emotion would affect various physiological processes, all with overlapping phenomena, transfer processes, and so forth. Still, I found emotional modeling to be an irresistibly exciting idea and I wanted to pursue it. Thanks to my friend Kim Silverman, who was at the time working on his PhD with Anne Cutler at the Applied Psychology Unit (APU) of the Medical Research Council (MRC) in Cambridge, I got to talk to some of the smart folks at the APU, who eventually let me know, gently, politely, but firmly, that I might as well start working on a pedal-driven space vehicle as something more realistic.
The core problem was that too little was known about emotions, about physiology, and about the general architecture of affective modeling. Undeterred, I ended up going to grad school at Dartmouth College with the goal of learning more about emotions, physiology and AI. Of course, I got distracted along the way and shifted my interest to intra- and interpersonal emotion regulation, which turned out to be the topic of my PhD thesis in 1989.
After this little bit of personal history, you can understand why even the limited emotional capabilities of my vacuum cleaner can excite me. But it must be clarified that the goal of much of affective computing research today is quite different from what I had in mind when I first got interested in the subject. My main goal was to build an artificial agent that could help me sort out the predictions of a complex theory, and eventually subject the theory to conclusive empirical tests. I was not alone in this attempt to test and improve theories through modeling. Many colleagues, including some of our early ISRE members, have worked on similar projects over the past 30 years. I will not try to enumerate them all for fear of forgetting someone – you know who you are – but I want to mention at least Andrew Ortony, Jerry Clore, and Allan Collins.
These days, the modeling of artificial emotional agents is driven by practical rather than theoretical concerns coming from the world of business, medicine, engineering, social work, the military, and so on. For instance, would a rescue robot endowed with a suitably calibrated fear system be a more effective tool than a non-emotional counterpart? Would a machine sensitive to human emotional expressions be a better negotiator? Would a pet that simulates the emotional expressions of an actual dog provide genuine companionship to an ailing patient? These are all hard questions, and they are driving research in a variety of fields.
So let us assume now, for the sake of argument, that you are approached by an engineer intent on building an artificial fear system in a robot: “Dear emotion specialist, what do I need to do to build such a system?” This is where it gets interesting. Debates that tend to grip much of our attention (as our listserv discussions often attest) suddenly appear less central, such as the debate on nature vs. nurture or the debate on the role of language in emotional phenomena.
What really matters is what the fear system does for the organism – what triggers it, what the consequences of its activation are, and what differences in implementation are required for transitioning from a biological to an artificial system. A heart rate increase may suddenly look like an increase in electrical power consumption, a behavioral predisposition may look like a change in the probabilities of activating behavioral options, impulsivity may look like a change in perceived probabilities of risk, and so on.
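To make the translation concrete, here is a minimal sketch of what such an artificial fear module might look like. Everything in it – the function names, the thresholds, the numeric mappings – is an illustrative assumption, not a claim about how any real robot or theory works; the point is only to show the biological-to-machine recasting described above in procedural form.

```python
# Hypothetical sketch of an "artificial fear" module. All names and
# numbers are illustrative assumptions for exposition.
from dataclasses import dataclass


@dataclass
class FearState:
    intensity: float = 0.0  # 0.0 (calm) to 1.0 (maximal fear)


def appraise_threat(distance_m: float, closing_speed_mps: float) -> float:
    """Toy appraisal: closer, faster-approaching obstacles are more threatening."""
    proximity = max(0.0, 1.0 - distance_m / 5.0)        # threat rises inside 5 m
    approach = max(0.0, min(1.0, closing_speed_mps / 2.0))
    return min(1.0, 0.6 * proximity + 0.4 * approach)


def update(state: FearState, threat: float, decay: float = 0.8) -> FearState:
    """Fear decays over time but is refreshed by new threat appraisals."""
    state.intensity = max(threat, state.intensity * decay)
    return state


def consequences(state: FearState) -> dict:
    """Biological responses recast as machine analogues, as in the text:
    heart-rate increase   -> larger electrical power budget;
    behavioral disposition -> shifted probability of fleeing;
    impulsivity            -> inflated perceived risk."""
    return {
        "power_budget_watts": 10 + 40 * state.intensity,
        "p_flee": 0.1 + 0.8 * state.intensity,
        "perceived_risk_multiplier": 1.0 + 2.0 * state.intensity,
    }


s = FearState()
s = update(s, appraise_threat(distance_m=1.0, closing_speed_mps=1.5))
print(consequences(s))
```

Even this toy version forces decisions a purely verbal theory can postpone: how fast fear decays, how appraisal inputs are weighted, and exactly which downstream parameters an activated fear system is allowed to modulate.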
In other words, when faced with a practical problem of implementation, one has to start thinking in procedural terms about emotions, in terms of functions and systems embedded in other systems. This has the potential of transforming how one thinks about emotions. In particular, the need to make artificial emotional systems “work” can reveal limitations in our own theorizing. As it turns out, a surprising number of engineers who have given a cursory read to some introductory texts in the emotion literature are still convinced that there are 6 or 7 basic emotions and that it is known what they look like on the human face.
All an artificial system needs to do, on this view, is have the FACS patterns for each of these emotions at its robotic fingertips, so to speak, and use them to infer whether the interactant is angry or happy. This simple modeling assumption came to a crushing end when robots started getting confused by smiles. The point is that humans tend to smile a lot, whether they are angry or happy. Clearly, if one wants to design systems that can decipher human emotional displays, one needs to start taking the social context seriously. What a smile means, surprise surprise, depends on the context.
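The contrast between the naive mapping and a context-sensitive one can be sketched in a few lines. The context labels and probability values here are entirely invented for illustration – they stand in for whatever empirical distributions a real system would have to learn – but they show why the same facial signal cannot carry a fixed emotional meaning.

```python
# Hypothetical sketch: decoding the SAME facial signal (a smile) under
# different social contexts. All labels and probabilities are invented
# placeholders, not empirical values.

def interpret_smile(context: str) -> dict:
    """Return an illustrative probability distribution over emotions,
    conditioned on the social context in which the smile occurs."""
    tables = {
        "receiving_gift":    {"happy": 0.85, "polite": 0.10, "angry": 0.05},
        "tense_negotiation": {"happy": 0.15, "polite": 0.55, "angry": 0.30},
        "being_insulted":    {"happy": 0.05, "polite": 0.25, "angry": 0.70},
    }
    return tables[context]


# The same display, read three different ways once context is considered:
for ctx in ("receiving_gift", "tense_negotiation", "being_insulted"):
    dist = interpret_smile(ctx)
    best = max(dist, key=dist.get)
    print(f"smile during {ctx} -> most likely: {best}")
```

A context-blind system collapses all three rows into one, which is exactly how a robot ends up reading an angry smile as happiness.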
This is just one example of how the pressure to build systems that actually work can lead to tremendous progress. This is why I find the recent developments in affective computing and robotics so exciting. Trying to realize what science fiction long ago suggested was possible affords us a unique opportunity to engage in a practice-driven form of progress in emotion science. We are the experts, we are the ones who should be consulted – and we will benefit from interactions with implementers. One of the advantages is that we might be forced to make an informed guess in cases where we do not know. My hunch is that this guess might sometimes be the nudge that gets us out of a loop and pushes us to develop our theories further and more concretely.
By no means do I want to downplay the epistemological and moral issues raised by building artificial emoting machines. However, I do think that the specific challenges related to physically implementing some of our abstract ideas on what emotions are and how they work can provide a salutary kick in the butt of our science in ways that endless debates in the abstract may never provide.
If you are not familiar with affective computing and “artificial emotions”, a first step is to learn more about such research, for example in this article addressed to the public at large. There are specialized outlets, such as the highly interdisciplinary IEEE Transactions on Affective Computing. And last but not least, ISRE meetings offer a great venue for interdisciplinary encounters between theoreticians and implementers. These interdisciplinary collaborations require much patience on both sides, and demand specific efforts to get familiar with unfamiliar terminology and thinking styles. Expect more discussion of such issues at the upcoming ISRE2015 in Geneva. And so I hope that quite a few of us will benefit from thinking about happy vacuum cleaners 🙂.