Why emotion AI sets off warning bells, but sentiment analysis does not

Many researchers and experts agree that sentiment analysis and emotion AI are two very different things.


Sometimes the terms "sentiment analysis" and "emotion AI" are used interchangeably.

Illustration: Christopher T. Fong/Protocol

When Zoom introduced new features last week to analyze customer sentiment during sales or business meetings based on conversation transcripts, the company said it was also considering adding a different but controversial form of AI to that service in the future: emotion AI. Other companies already include emotion AI — sometimes referred to as “affect AI” or “emotion recognition” — in sales and virtual school software.

Although both sentiment analysis and emotion AI aim to understand people’s attitudes and feelings, many researchers and experts agree that they are two very different things, even when sentiment analysis incorporates AI approaches such as deep learning.

Still, the terms are sometimes used interchangeably, which can cause confusion. For example, when Fight for the Future launched a campaign last week urging Zoom not to adopt emotion AI in its videoconferencing software, the organization referred to the two terms synonymously. It updated its campaign Thursday after this story was published.

“Sentiment analysis, like facial recognition in general, is inherently biased,” wrote the group. “These tools assume that all people use the same facial expressions, voice patterns, and body language—but that’s not true.”

The thing is, sentiment analysis typically has nothing to do with facial data. In fact, an important distinction between sentiment analysis and emotion AI is in the data sources these technologies use to generate their conclusions.

Sentiment analysis and words

Sentiment analysis tools mine text to gauge people’s opinions or attitudes toward something. Since the early days of social media, sentiment analysis and social media monitoring software providers have categorized the text in public posts, tweets and product reviews, analyzing their content in an attempt to determine what social posts say about products, retailers, restaurants or even politicians.

What do people think about a new Oreo filling flavor or President Biden’s latest initiative? Sentiment analysis offers clues.

In essence, sentiment analysis is about language, said Nandita Sampath, a policy analyst with Consumer Reports focused on algorithmic bias and accountability issues. “Sentiment analysis, in my opinion, is more analyzing tone from either text or speech,” she said.
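The language-based approach Sampath describes can be illustrated with a toy example. The sketch below is a minimal lexicon-based scorer written for this article; the word lists are invented for illustration, and commercial tools rely on far larger curated lexicons or deep-learning models trained on labeled text.

```python
# Toy lexicon-based sentiment scorer: an illustrative sketch only,
# not any vendor's actual method. Word lists are invented examples.
POSITIVE = {"love", "great", "delicious", "amazing", "good"}
NEGATIVE = {"hate", "awful", "terrible", "bad", "disappointing"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: below zero is negative, above is positive."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = [1 for w in words if w in POSITIVE] + [-1 for w in words if w in NEGATIVE]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment_score("I love the new Oreo filling, it is delicious"))  # positive
print(sentiment_score("The new flavor is awful and disappointing"))     # negative
```

Note that the only input is text: no facial, vocal or other biometric data is involved, which is the key distinction the experts quoted here draw.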

Emotion AI and the face

Even though emotion or affect AI attempts to detect people’s sentiments, it goes about it in a different way and uses forms of data that classic sentiment analysis does not. While sentiment analysis is all about words and text, emotion AI is typically about the face and facial expressions.

Rana el Kaliouby, co-founder and CEO of emotion AI provider Affectiva and a longtime researcher in the field, agreed. “Sentiment analysis is usually text-based or word-based analysis,” she told Protocol.

Instead, el Kaliouby said, emotion AI analyzes facial expressions and sometimes incorporates other signals such as vocal and even physiological data. Technology she helped develop for Affectiva, now part of driver-monitoring AI company Smart Eye, was built using data representing millions of faces from people in 75 countries.

“Obviously, you can infer someone's emotion from tone, but emotion or affect recognition is more about analyzing someone's physical characteristics,” said Sampath, who said she has defined emotion recognition as AI that attempts to predict emotions in real time based on someone’s faceprint. Sometimes emotion AI might even look to other forms of biometric data, such as a person’s gait, she said.

Because emotion AI typically relies on using computer vision to capture and recognize facial imagery, it is often referred to in relation to facial recognition.
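To make the contrast with sentiment analysis concrete, here is a purely hypothetical sketch of an emotion classifier whose inputs are facial measurements rather than text. The feature names and thresholds are invented for illustration; real systems run computer-vision models over face imagery and are far more complex.

```python
# Hypothetical sketch: the input is facial measurements, not words.
# Feature names (loosely modeled on facial "action units") and the
# thresholds are invented for this illustration.
def classify_emotion(brow_raise: float, lip_corner_pull: float) -> str:
    """Map toy facial-movement intensities (0 to 1) to an emotion label."""
    if lip_corner_pull > 0.6:
        return "joy"
    if brow_raise > 0.6:
        return "surprise"
    return "neutral"

print(classify_emotion(brow_raise=0.1, lip_corner_pull=0.9))  # joy
print(classify_emotion(brow_raise=0.8, lip_corner_pull=0.1))  # surprise
```

Even this toy version surfaces the core criticism: it hard-codes the assumption that a given facial movement reliably maps to a given inner feeling.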

Indeed, in her discussion of emotion AI in her 2021 book “Atlas of AI,” Kate Crawford, an AI ethics scholar, research professor at USC Annenberg and a senior principal researcher at Microsoft Research, wrote: “Whereas facial recognition attempts to identify a particular individual, affect detection aims to detect and classify emotions by analyzing any face.” She explained that the immense volume of facial imagery gleaned from social media platforms has helped fuel AI that aims to detect emotions.

Sentiment analysis as a form of emotion AI

Nazanin Andalibi, an assistant professor at the University of Michigan School of Information who holds a doctorate in information studies and researches AI used to detect emotion, agreed that there are distinctions to be made between sentiment analysis and emotion AI, and that concerns around validity or bias may be more or less pronounced depending on what data sources are used and what is being measured.

However, she sees deeper connections between sentiment analysis and emotion AI. In fact, she considers sentiment analysis using text to recognize what she calls “affective phenomena” to be a form of emotion AI, and more broadly a tool in affective computing systems.

“One of the critiques I have of existing discourse around emotion AI is that there is so much focus on facial recognition,” Andalibi said, pointing to other affective computing systems intended to detect emotion that use data including text, social media data and other computing behavior data, as well as biometric data such as voice and facial data.

While she said she believes facial recognition technology is “problematic” and “terrible,” she added, “One reason I am concerned about just focusing on problems with the face or voice is that this may support stakeholders — like those purchasing and deploying these technologies, [such as] regulators, technologists or other actors — to move away from the collection of facial or voice data and simply shift to other sensitive data types without truly addressing their fundamental harmful implications, even if and when there are no bias, validity or accuracy concerns.”

The controversy around facial data

Even though these computing systems share the same goal of understanding how people feel, regardless of their data inputs, many people see very important distinctions between the words we write or speak and the expressions our faces make. While interpreting the sentiments of what people write or say has its own set of problems (sarcasm, anyone?), sentiment analysis of language has not been subject to the intense level of criticism that emotion AI using facial expression data has.

The validity of emotion AI using facial expressions to gauge someone’s feelings has been seriously questioned, and often raises ethical concerns. Not only do some researchers believe the ways people express emotions such as joy, anger, fear and surprise vary across cultures and situations, but people often do not consciously project what they are thinking or feeling through their facial expressions. In contrast, people choose what they post online and what they say.

Indeed, what others might interpret from someone’s facial expressions can be quite different from what that person is actually feeling. In particular, neurodivergent people might express emotion in ways that can be inaccurately interpreted by other people or emotion AI.

As emotion AI is incorporated into more and more everyday tech, the drumbeat against it is growing louder.

In 2019, the AI Now Institute called for a ban on the use of emotion AI in important decisions such as hiring and judging student performance. In 2021, the Brookings Institution called for a ban on its use by law enforcement, noting: “There is insufficient evidence that these technologies work reliably enough to be used for the high stakes of law enforcement. Even worse, they threaten core American principles of civil liberty in a pluralistic society by presuming that facial movements, physical reactions, and tone of voice can be evidence of criminality.”

Most recently, in its open letter to Zoom asking the company to nix potential plans to use emotion AI, Fight for the Future wrote: “The way we move our faces is often disconnected from the emotions underneath, and research has found that not even humans can measure emotion from faces some of the time. Why add credence to pseudoscience and stake your reputation on a fundamentally broken feature?”

This story was updated to reflect that Fight for the Future changed the wording of its campaign after this story was published.

