Why emotion AI sets off warning bells, but sentiment analysis does not

Many researchers and experts agree that sentiment analysis and emotion AI are two very different things.

Sometimes the terms "sentiment analysis" and "emotion AI" are used interchangeably.

Illustration: Christopher T. Fong/Protocol

When Zoom introduced new features last week to analyze customer sentiment during sales or business meetings based on conversation transcripts, the company said it was also considering adding a different, more controversial form of AI to that service in the future: emotion AI. Other companies already include emotion AI — sometimes referred to as “affect AI” or “emotion recognition” — in sales and virtual school software.

Although both sentiment analysis and emotion AI aim to understand people’s attitudes and feelings, many researchers and experts agree that they are two very different things, even when sentiment analysis incorporates AI approaches such as deep learning.

Still, sometimes the terms have been used interchangeably, which might cause confusion. For example, when Fight for the Future launched a campaign last week urging Zoom not to adopt emotion AI in its videoconferencing software, the organization referred to both terms synonymously. It updated its campaign Thursday after this story was published.

“Sentiment analysis, like facial recognition in general, is inherently biased,” wrote the group. “These tools assume that all people use the same facial expressions, voice patterns, and body language—but that’s not true.”

The thing is, sentiment analysis typically has nothing to do with facial data. In fact, an important distinction between sentiment analysis and emotion AI is in the data sources these technologies use to generate their conclusions.

Sentiment analysis and words

Sentiment analysis tools mine text to gauge people’s opinions or attitudes toward something. Since the early days of social media, sentiment analysis and social media monitoring software providers have categorized the text in public posts, tweets and product reviews, analyzing their content in an attempt to determine what social posts say about products, retailers, restaurants or even politicians.

What do people think about a new Oreo filling flavor or President Biden’s latest initiative? Sentiment analysis offers clues.

In essence, sentiment analysis is about language, said Nandita Sampath, a policy analyst with Consumer Reports focused on algorithmic bias and accountability issues. “Sentiment analysis, in my opinion, is more analyzing tone from either text or speech,” she said.
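To make the distinction concrete, here is a deliberately minimal sketch of lexicon-based scoring, the simplest form of text sentiment analysis. The word lists and the sentiment_score function below are illustrative stand-ins, not any vendor's product; commercial tools rely on far larger lexicons or trained language models, including the deep learning approaches mentioned above.

```python
# A minimal, illustrative sketch of lexicon-based sentiment scoring.
# The word lists are toy stand-ins for the large lexicons or trained
# models that commercial sentiment analysis tools actually use.
POSITIVE = {"love", "great", "delicious", "amazing", "good"}
NEGATIVE = {"hate", "awful", "bland", "terrible", "bad"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: below 0 reads negative, above 0 positive."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = [(w in POSITIVE) - (w in NEGATIVE) for w in words]
    scored = [h for h in hits if h != 0]
    return sum(scored) / len(scored) if scored else 0.0

# e.g., scoring hypothetical posts about a new cookie filling flavor:
print(sentiment_score("I love the new filling, it's delicious!"))  # 1.0
print(sentiment_score("The new flavor is bland and terrible."))    # -1.0
```

Nothing in this sketch touches a camera or a face: the only input is text.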

Emotion AI and the face

Even though emotion or affect AI attempts to detect people’s sentiments, it goes about it in a different way and uses forms of data that classic sentiment analysis does not. While sentiment analysis is all about words and text, emotion AI typically is about the face and facial expressions.

Rana el Kaliouby, co-founder and CEO of emotion AI provider Affectiva and a longtime researcher in the field, agreed. “Sentiment analysis is usually text-based or word-based analysis,” she told Protocol.

Instead, el Kaliouby said, emotion AI analyzes facial expressions and sometimes incorporates other signals such as vocal and even physiological data. Technology she helped develop for Affectiva, now part of driver-monitoring AI company Smart Eye, was built using data representing millions of faces from people in 75 countries.

“Obviously, you can infer someone's emotion from tone, but emotion or affect recognition is more about analyzing someone's physical characteristics,” said Sampath, who defines emotion recognition as AI that attempts to predict emotions in real time based on someone’s faceprint. Emotion AI can even draw on other forms of biometric data, such as a person’s gait, she said.
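Sketched as code, that pipeline looks very different from the text example above. Everything here is hypothetical: detect_face, extract_expression_features and classify_emotion are illustrative stand-ins for a vendor's computer-vision stack, not any real library's API.

```python
# Hypothetical emotion AI pipeline -- purely illustrative. These
# functions are stand-ins for a vendor's computer-vision stack,
# not a real library's API.
EMOTIONS = ["joy", "anger", "fear", "surprise", "sadness", "neutral"]

def detect_face(frame: bytes) -> bytes:
    """Stub: locate and crop the face in a camera frame."""
    return frame  # a real system runs a face detector here

def extract_expression_features(face: bytes) -> list[float]:
    """Stub for expression features (brow raise, smile intensity, etc.),
    often modeled on the 'action units' of facial coding systems."""
    return [0.1, 0.7, 0.0]  # placeholder feature vector

def classify_emotion(features: list[float]) -> str:
    """Stub classifier; a real one is a model trained on large sets of
    labeled faces, like the data Affectiva describes using."""
    return EMOTIONS[1] if features[1] > 0.5 else EMOTIONS[-1]

frame = b"..."  # one frame from a webcam or an in-car camera
face = detect_face(frame)
print(classify_emotion(extract_expression_features(face)))  # -> anger
```

The input here is a camera frame rather than text, which is why emotion AI is so often discussed alongside facial recognition, and why concerns about biometric data apply to it but not to the text-only sketch above.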

Because emotion AI typically relies on using computer vision to capture and recognize facial imagery, it is often referred to in relation to facial recognition.

Indeed, in her discussion of emotion AI in her 2021 book “Atlas of AI,” Kate Crawford, an AI ethics scholar, research professor at USC Annenberg and a senior principal researcher at Microsoft Research, wrote: “Whereas facial recognition attempts to identify a particular individual, affect detection aims to detect and classify emotions by analyzing any face.” She explained that the immense volume of facial imagery gleaned from social media platforms has helped fuel AI that aims to detect emotions.

Sentiment analysis as a form of emotion AI

Nazanin Andalibi, who holds a doctorate in information studies and is an assistant professor at the University of Michigan School of Information studying AI used to detect emotion, agreed that there are distinctions to be made between sentiment analysis and emotion AI, and that concerns around validity or bias may be more or less pronounced depending on the data sources used and what is being measured.

However, she sees deeper connections between sentiment analysis and emotion AI. In fact, she considers sentiment analysis using text to recognize what she calls “affective phenomena” to be a form of emotion AI, and more broadly a tool in affective computing systems.

“One of the critiques I have of existing discourse around emotion AI is that there is so much focus on facial recognition,” Andalibi said, pointing to other affective computing systems intended to detect emotion that use data including text, social media data and other computing behavior data, as well as biometric data such as voice and facial data.

While she said she believes facial recognition technology is “problematic” and “terrible,” she added, “One reason I am concerned about just focusing on problems with the face or voice is that this may support stakeholders — like those purchasing and deploying these technologies, [such as] regulators, technologists or other actors — to move away from the collection of facial or voice data and simply shift to other sensitive data types without truly addressing their fundamental harmful implications, even if and when there are no bias, validity or accuracy concerns.”

The controversy around facial data

Whatever their data inputs, these computing systems share the same goal: understanding how people feel. Yet many people see very important distinctions between the words we write or speak and the expressions our faces make. While interpreting the sentiments of what people write or say has its own set of problems (sarcasm, anyone?), sentiment analysis of language has not faced the intense criticism directed at emotion AI that uses facial expression data.

The validity of emotion AI that uses facial expressions to gauge someone’s feelings has been seriously questioned, and the technology often raises ethical concerns. Not only do some researchers believe the ways people express emotions such as joy, anger, fear and surprise vary across cultures and situations, but people often do not consciously project what they are thinking or feeling through their facial expressions. In contrast, people choose what they post online and what they say.

Indeed, what others might interpret from someone’s facial expressions can be quite different from what that person is actually feeling. In particular, neurodivergent people might express emotion in ways that can be inaccurately interpreted by other people or emotion AI.

As emotion AI is incorporated into more and more everyday tech, the drumbeat against it is growing louder.

In 2019, the AI Now Institute called for a ban on the use of emotion AI in important decisions such as hiring and judging student performance. In 2021, the Brookings Institution called for banning its use by law enforcement, noting: “There is insufficient evidence that these technologies work reliably enough to be used for the high stakes of law enforcement. Even worse, they threaten core American principles of civil liberty in a pluralistic society by presuming that facial movements, physical reactions, and tone of voice can be evidence of criminality.”

Most recently, in its open letter to Zoom asking the company to nix potential plans to use emotion AI, Fight for the Future wrote: “The way we move our faces is often disconnected from the emotions underneath, and research has found that not even humans can measure emotion from faces some of the time. Why add credence to pseudoscience and stake your reputation on a fundamentally broken feature?”

This story was updated to reflect that Fight for the Future changed the wording of its campaign after this story was published.
