When Zoom introduced new features last week to analyze customer sentiment during sales or business meetings based on conversation transcripts, the company said it is also considering the addition of a different but controversial form of AI to that service in the future: emotion AI. Other companies already include emotion AI — sometimes referred to as “affect AI” or “emotion recognition” — in sales and virtual school software.
Although both sentiment analysis and emotion AI aim to understand people’s attitudes and feelings, many researchers and experts agree that they are two very different things, even when sentiment analysis incorporates AI approaches such as deep learning.
Still, the terms are sometimes used interchangeably, which can cause confusion. For example, when Fight for the Future launched a campaign last week urging Zoom not to adopt emotion AI in its videoconferencing software, the organization used the two terms synonymously. It updated its campaign Thursday after this story was published.
“Sentiment analysis, like facial recognition in general, is inherently biased,” wrote the group. “These tools assume that all people use the same facial expressions, voice patterns, and body language—but that’s not true.”
The thing is, sentiment analysis typically has nothing to do with facial data. In fact, an important distinction between sentiment analysis and emotion AI is in the data sources these technologies use to generate their conclusions.
Sentiment analysis and words
Sentiment analysis tools mine text to gauge people’s opinions or attitudes toward something. Since the early days of social media, sentiment analysis and social media monitoring software providers have categorized the text in public posts, tweets and product reviews, analyzing their content in an attempt to determine what social posts say about products, retailers, restaurants or even politicians.
What do people think about a new Oreo filling flavor or President Biden’s latest initiative? Sentiment analysis offers clues.
In essence, sentiment analysis is about language, said Nandita Sampath, a policy analyst with Consumer Reports focused on algorithmic bias and accountability issues. “Sentiment analysis, in my opinion, is more analyzing tone from either text or speech,” she said.
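To illustrate just how text-centric classic sentiment analysis is, here is a minimal, purely illustrative lexicon-based sketch. The word lists and scoring scheme are toy assumptions for this example; production tools use far larger lexicons or deep-learning models and handle negation, sarcasm and context.

```python
# Toy lexicons (illustrative only; real systems use thousands of entries).
POSITIVE = {"great", "love", "excellent", "good", "amazing"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "poor"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: positive minus negative word counts,
    normalized by the number of sentiment-bearing words found."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("I love the new flavor, it is amazing"))  # 1.0
print(sentiment_score("Terrible service and awful food"))       # -1.0
```

Note that nothing here looks at a face, a voice or any biometric signal; the only input is language.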
Emotion AI and the face
Even though emotion or affect AI attempts to detect people’s sentiments, it goes about it in a different way and uses forms of data that classic sentiment analysis does not. While sentiment analysis is all about words and text, emotion AI typically is about the face and facial expressions.
Rana el Kaliouby, co-founder and CEO of emotion AI provider Affectiva and a longtime researcher in the field, agreed. “Sentiment analysis is usually text-based or word-based analysis,” she told Protocol.
Emotion AI, by contrast, el Kaliouby said, analyzes facial expressions and sometimes incorporates other signals such as vocal and even physiological data. Technology she helped develop for Affectiva, now part of driver-monitoring AI company Smart Eye, was built using data representing millions of faces from people in 75 countries.
“Obviously, you can infer someone's emotion from tone, but emotion or affect recognition is more about analyzing someone's physical characteristics,” said Sampath, who defines emotion recognition as AI that attempts to predict emotions in real time based on someone’s faceprint. Sometimes emotion AI might even look to other forms of biometric data, such as a person’s gait, she said.
Because emotion AI typically relies on using computer vision to capture and recognize facial imagery, it is often referred to in relation to facial recognition.
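The contrast with the text-only pipeline above can be made concrete with another hypothetical sketch: here the input is not words but measurements derived from a face. Everything in this example is a stand-in; the `FacePrint` features and the rule-based classifier are invented for illustration, whereas real emotion AI systems extract features with computer vision and classify them with deep models.

```python
from dataclasses import dataclass

@dataclass
class FacePrint:
    """Toy stand-in for features a vision model might extract from a face."""
    mouth_curvature: float  # >0 suggests a smile, <0 a frown (invented feature)
    brow_raise: float       # >0 suggests raised eyebrows (invented feature)

def classify_expression(face: FacePrint) -> str:
    """Toy rule-based stand-in for an emotion-recognition model."""
    if face.mouth_curvature > 0.5:
        return "joy"
    if face.brow_raise > 0.5:
        return "surprise"
    if face.mouth_curvature < -0.5:
        return "sadness"
    return "neutral"

print(classify_expression(FacePrint(mouth_curvature=0.8, brow_raise=0.1)))  # joy
```

The sketch also hints at the critique below: the mapping from facial measurements to an emotion label is an assumption baked into the system, not a ground truth.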
Indeed, in her discussion of emotion AI in her 2021 book “Atlas of AI,” Kate Crawford, an AI ethics scholar, research professor at USC Annenberg and a senior principal researcher at Microsoft Research, wrote: “Whereas facial recognition attempts to identify a particular individual, affect detection aims to detect and classify emotions by analyzing any face.” She explained that the immense volume of facial imagery gleaned from social media platforms has helped fuel AI that aims to detect emotions.
Sentiment analysis as a form of emotion AI
Nazanin Andalibi, who holds a doctorate in information studies and is an assistant professor at the University of Michigan School of Information studying AI used to detect emotion, agreed that there are distinctions to be made between sentiment analysis and emotion AI, and that concerns around validity or bias may be more or less pronounced depending on what data sources are used and what is being measured.
However, she sees deeper connections between sentiment analysis and emotion AI. In fact, she considers sentiment analysis using text to recognize what she calls “affective phenomena” to be a form of emotion AI, and more broadly a tool in affective computing systems.
“One of the critiques I have of existing discourse around emotion AI is that there is so much focus on facial recognition,” Andalibi said, pointing to other affective computing systems intended to detect emotion that use data including text, social media data and other computing behavior data, as well as biometric data such as voice and facial data.
While she said she believes facial recognition technology is “problematic” and “terrible,” she added, “One reason I am concerned about just focusing on problems with the face or voice is that this may support stakeholders — like those purchasing and deploying these technologies, [such as] regulators, technologists or other actors — to move away from the collection of facial or voice data and simply shift to other sensitive data types without truly addressing their fundamental harmful implications, even if and when there are no bias, validity or accuracy concerns.”
The controversy around facial data
Even though the goal of these computing systems — to understand how people feel — is the same regardless of their data inputs, many people see very important distinctions between the words we write or speak and the expressions our faces make. While interpreting the sentiments of what people write or say has its own set of problems (sarcasm, anyone?), sentiment analysis of language has not been subject to the intense level of criticism that emotion AI using facial expression data has.
The validity of using facial expressions to gauge someone’s feelings has been seriously questioned, and the technology often raises ethical concerns. Not only do some researchers believe the ways people express emotions such as joy, anger, fear and surprise vary across cultures and situations, but people often do not consciously project what they are thinking or feeling through their facial expressions. In contrast, people choose what they post online and what they say.
Indeed, what others might interpret from someone’s facial expressions can be quite different from what that person is actually feeling. In particular, neurodivergent people might express emotion in ways that can be inaccurately interpreted by other people or emotion AI.
As emotion AI is incorporated into more and more everyday tech, the drumbeat against it is growing louder.
In 2019, the AI Now Institute called for a ban on the use of emotion AI in important decisions such as hiring and judging student performance. In 2021, the Brookings Institution called for banning its use by law enforcement, noting: “There is insufficient evidence that these technologies work reliably enough to be used for the high stakes of law enforcement. Even worse, they threaten core American principles of civil liberty in a pluralistic society by presuming that facial movements, physical reactions, and tone of voice can be evidence of criminality.”
Most recently, in its open letter to Zoom asking the company to nix potential plans to use emotion AI, Fight for the Future wrote: “The way we move our faces is often disconnected from the emotions underneath, and research has found that not even humans can measure emotion from faces some of the time. Why add credence to pseudoscience and stake your reputation on a fundamentally broken feature?”
This story was updated to reflect that Fight for the Future changed the wording of its campaign after this story was published.