Once more with feeling: Why the next wave of AI should be 'in tune' with our emotions

By Sara Saab, VP of Product, Prolific.

Friday, 3rd November 2023

Emotions guide us, fulfil us, and sometimes take us to difficult places inside ourselves. They are the mark of our humanity, yet we don’t fully understand them. Some researchers believe there are fewer than six basic emotions; others put the number closer to 21.

  

As the artificial intelligence (AI) revolution continues, teaching machines to understand human emotions has become crucial. For this reason, the debate surrounding ‘emotion AI’ has intensified. Essentially, this is a subset of AI that can identify human emotions from facial expressions, voice inflections, body language and other physical signals.

  

This is important. The better AI and large language models can empathise with our feelings, the better they can serve us. The concept is not at all new; text-based emotion AI has existed for years. Until now, a common use case has been marketers using sentiment analysis to understand audience perception. But with the technology growing ever more sophisticated, further use cases are materialising. Consider areas like healthcare, where a high level of emotional intelligence is critical in boosting engagement, comfort and trust. Applications such as therapy chatbots require human-like rapport and personality to be effective.
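
To make this concrete, here is a minimal sketch of the kind of text-based sentiment analysis marketers have long relied on. It assumes the open-source Hugging Face transformers library and its default pretrained sentiment model; the snippet is illustrative rather than production-ready.

```python
# A minimal sentiment-analysis sketch: the longest-standing form of
# text-based emotion AI mentioned above. Assumes the open-source
# Hugging Face `transformers` library is installed; the default model
# is downloaded on first run and is an illustrative choice only.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

reviews = [
    "The support agent was wonderful and really listened to me.",
    "I waited forty minutes and nobody apologised.",
]

for review in reviews:
    result = classifier(review)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
    print(f"{result['label']:>8} ({result['score']:.2f})  {review}")
```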

  

Emotion AI can also help build a fairer society. An estimated ten percent of the world’s population display the trait of alexithymia, that is, difficulty identifying and describing emotions. If an AI companion could support these individuals, we’d be one step closer to a more inclusive society.

  

The challenges facing emotion AI  

  

Today, emotion AI promises benefits in real-world scenarios, but it still struggles to grasp the true breadth of human emotion. As a result, groups of policymakers in the European Union and the United States are arguing that the technology should be banned.

  

It’s true that emotions are too complex to be understood through technology alone; they demand human input. Yet a growing number of platforms that market their ability to fine-tune AIs through human intervention are actually harnessing AI to do this important work.

 

Now, new research into methods like Reinforcement Learning from AI Feedback (RLAIF) is showing some merit by allowing AIs to help humans train other AIs at scale. However, unbounded and unchecked, this practice conceivably leads us to an unrecognisable outcome: ‘artificial artificial intelligence’, as Jeff Bezos puts it, a snake eating its own tail. Without learning from human feedback, AI models will make choices that do not reflect the emotional values and needs of diverse populations. This will also put users off: AI needs to feel human if it is to appeal, and to be normalised.
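
The core move of RLAIF can be sketched in a few lines. The sketch below is a deliberately toy illustration, and every name in it (judge_score, ai_preference, PRINCIPLE) is a hypothetical placeholder rather than a real API: an AI judge, not a person, produces the preference labels that would otherwise come from human annotators.

```python
# A toy sketch of the RLAIF idea: an AI 'judge' labels which of two
# candidate responses better follows a written principle, standing in
# for a human annotator. All names here are hypothetical placeholders.

PRINCIPLE = "Prefer the response that acknowledges the user's feelings."

def judge_score(principle: str, prompt: str, response: str) -> float:
    """Stand-in for a call to a large 'judge' model. A real RLAIF
    pipeline would prompt an LLM here; we fake a score so the
    sketch runs without any model."""
    return float(len(response))  # placeholder heuristic, NOT a real judge

def ai_preference(prompt: str, response_a: str, response_b: str) -> str:
    """Emit the preference label a human would otherwise produce.
    AI-generated labels like this are what make RLAIF scale, and also
    what risks the 'snake eating its own tail' described above."""
    score_a = judge_score(PRINCIPLE, prompt, response_a)
    score_b = judge_score(PRINCIPLE, prompt, response_b)
    return "A" if score_a >= score_b else "B"

label = ai_preference(
    "I've had an awful week.",
    "That sounds really hard. Do you want to talk about it?",
    "Noted.",
)
print(label)  # 'A'; such labels would then feed reward-model training
```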

  

Regulators should therefore direct their attention to poorly developed emotion AI rather than pushing for an outright ban. Knee-jerk policies will not serve us in this complex new world, and a blanket ban would fail to consider the numerous benefits of emotion AI.

  

So how can we improve emotion AI?  

  

Even if scientists spend the next few decades struggling to agree on a definition of human emotion, there are still significant advances to be made in the field of emotion AI.

 

Key to developing the emotional intelligence of AI is ‘human-in-the-loop’ (HITL) training. HITL, of which Reinforcement Learning from Human Feedback (RLHF) is the current exemplar, requires people, not other AIs, to provide feedback on information generated by AI. These human annotators rank and rate examples of the AI’s output based on its emotional intelligence. For instance, how empathetic was this AI chatbot? How natural-sounding was its response? How well did it understand your emotions?
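
One plausible shape for such a feedback record is sketched below. The field names and the 1-7 scale are assumptions made for illustration, not any platform’s actual schema.

```python
# A sketch of a single human-in-the-loop annotation record. The schema
# (field names, 1-7 scales) is an illustrative assumption, not any
# vendor's actual format.
from dataclasses import dataclass
from statistics import mean

@dataclass
class EmotionalIntelligenceRating:
    annotator_id: str           # a real, fairly paid person, not another AI
    response_id: str            # the AI chatbot output being judged
    empathy: int                # "How empathetic was this AI chatbot?" (1-7)
    naturalness: int            # "How natural-sounding was its response?" (1-7)
    emotion_understanding: int  # "How well did it understand your emotions?" (1-7)

ratings = [
    EmotionalIntelligenceRating("ann_01", "resp_42", empathy=6, naturalness=5, emotion_understanding=6),
    EmotionalIntelligenceRating("ann_02", "resp_42", empathy=4, naturalness=6, emotion_understanding=5),
]

# Aggregates like these become the reward signal used to fine-tune the model.
print("mean empathy:", mean(r.empathy for r in ratings))
```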

  

The response to each of these questions requires considered feedback. Real-life annotators must therefore be compensated fairly for their time and, importantly, come from a diverse range of backgrounds. After all, there is no single lived human experience, and diverse life experiences are what shape our emotions.

  

If we can meet these criteria, emotion AI will continue to show improvements and efficiency gains over time. The models will not only make choices that reflect the full range of emotions felt by humans, but also offer emotionally mature responses that meet the unique needs of each user. In short, only human annotators can help machines learn from lived human experience, supporting them to make better, more empathetic decisions.

  

Emotion AI, trained with humans in the loop, offers a path towards inclusivity, and even towards universal human dignity. By tapping into diverse human experiences, AI can become a more empathetic companion, paving the way for better AI-to-human interactions and, eventually, better human-to-human ones too.