Why emotion AI sets off warning bells, but sentiment analysis does not

Many researchers and experts agree that sentiment analysis and emotion AI are two very different things.


Sometimes the terms "sentiment analysis" and "emotion AI" are used interchangeably.

Illustration: Christopher T. Fong/Protocol

When Zoom introduced new features last week to analyze customer sentiment during sales or business meetings based on conversation transcripts, the company said it was also considering adding a different but controversial form of AI to that service in the future: emotion AI. Other companies already include emotion AI — sometimes referred to as “affect AI” or “emotion recognition” — in sales and virtual school software.

Although both sentiment analysis and emotion AI aim to understand people’s attitudes and feelings, many researchers and experts agree that they are two very different things, even when sentiment analysis incorporates AI approaches such as deep learning.

Still, sometimes the terms have been used interchangeably, which might cause confusion. For example, when Fight for the Future launched a campaign last week urging Zoom not to adopt emotion AI in its videoconferencing software, the organization referred to both terms synonymously. It updated its campaign Thursday after this story was published.

“Sentiment analysis, like facial recognition in general, is inherently biased,” wrote the group. “These tools assume that all people use the same facial expressions, voice patterns, and body language—but that’s not true.”

The thing is, sentiment analysis typically has nothing to do with facial data. In fact, an important distinction between sentiment analysis and emotion AI is in the data sources these technologies use to generate their conclusions.

Sentiment analysis and words

Sentiment analysis tools mine text to gauge people’s opinions or attitudes toward something. Since the early days of social media, sentiment analysis and social media monitoring software providers have categorized the text in public posts, tweets and product reviews, analyzing their content in an attempt to determine what social posts say about products, retailers, restaurants or even politicians.

What do people think about a new Oreo filling flavor or President Biden’s latest initiative? Sentiment analysis offers clues.

In essence, sentiment analysis is about language, said Nandita Sampath, a policy analyst with Consumer Reports focused on algorithmic bias and accountability issues. “Sentiment analysis, in my opinion, is more analyzing tone from either text or speech,” she said.
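To make the text-based nature of sentiment analysis concrete, here is a minimal lexicon-based sketch of the idea. Real sentiment analysis products use far larger lexicons or trained models (including the deep learning approaches mentioned above); the word lists and scoring rule below are invented for illustration and are not from any vendor mentioned in this story.

```python
# Toy lexicon-based sentiment scorer: count positive vs. negative words.
# Illustrative only -- real systems handle negation, sarcasm, context, etc.
POSITIVE = {"love", "great", "delicious", "amazing"}
NEGATIVE = {"hate", "awful", "terrible", "bland"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great new filling"))  # positive
```

Note that everything the scorer sees is language the person chose to write — no faces, voices or other biometric signals are involved, which is the key distinction the researchers draw below.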

Emotion AI and the face

Even though emotion or affect AI attempts to detect people’s sentiments, it goes about it in a different way and uses forms of data that classic sentiment analysis does not. While sentiment analysis is all about words and text, emotion AI typically is about the face and facial expressions.

Rana el Kaliouby, co-founder and CEO of emotion AI provider Affectiva and a longtime researcher in the field, agreed. “Sentiment analysis is usually text-based or word-based analysis,” she told Protocol.

Instead, el Kaliouby said, emotion AI analyzes facial expressions and sometimes incorporates other signals such as vocal and even physiological data. Technology she helped develop for Affectiva, now part of driver-monitoring AI company Smart Eye, was built using data representing millions of faces from people in 75 countries.

“Obviously, you can infer someone's emotion from tone, but emotion or affect recognition is more about analyzing someone's physical characteristics,” said Sampath, who said she has defined emotion recognition as AI that attempts to predict emotions in real time based on someone’s faceprint. Sometimes emotion AI might even look to other forms of biometric data, such as a person’s gait, she said.
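For contrast with the text-based sketch of sentiment analysis, the following hypothetical fragment shows the kind of input emotion AI works from: physical signals such as facial action-unit intensities rather than words. The thresholds, rules and labels here are invented for illustration; no real emotion AI system works this simply, and (as critics quoted below argue) the mapping from facial movements to felt emotion is itself contested.

```python
# Hypothetical emotion classifier over facial action units (FACS).
# AU12 = lip corner puller (associated with smiling),
# AU4 = brow lowerer (associated with frowning).
# Thresholds and labels are made up for this sketch.
def classify_emotion(action_units: dict) -> str:
    if action_units.get("AU12", 0.0) > 0.6:
        return "joy"
    if action_units.get("AU4", 0.0) > 0.6:
        return "anger"
    return "neutral"

print(classify_emotion({"AU12": 0.9, "AU4": 0.1}))  # joy
```

The point of the contrast: the sentiment scorer consumes words a person chose to produce, while this consumes measurements of the body, which the person may not consciously control.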

Because emotion AI typically relies on using computer vision to capture and recognize facial imagery, it is often referred to in relation to facial recognition.

Indeed, in her discussion of emotion AI in her 2021 book “Atlas of AI,” Kate Crawford, an AI ethics scholar, research professor at USC Annenberg and a senior principal researcher at Microsoft Research, wrote: “Whereas facial recognition attempts to identify a particular individual, affect detection aims to detect and classify emotions by analyzing any face.” She explained that the immense volume of facial imagery gleaned from social media platforms has helped fuel AI that aims to detect emotions.

Sentiment analysis as a form of emotion AI

Nazanin Andalibi, who holds a doctorate in information studies and is an assistant professor at the University of Michigan School of Information, studies AI used to detect emotion. She agreed that there are distinctions to be made between sentiment analysis and emotion AI, and that concerns around validity or bias may be more or less pronounced depending on what data sources are used and what is being measured.

However, she sees deeper connections between sentiment analysis and emotion AI. In fact, she considers sentiment analysis using text to recognize what she calls “affective phenomena” to be a form of emotion AI, and more broadly a tool in affective computing systems.

“One of the critiques I have of existing discourse around emotion AI is that there is so much focus on facial recognition,” Andalibi said, pointing to other affective computing systems intended to detect emotion that use data including text, social media data and other computing behavior data, as well as biometric data such as voice and facial data.

While she said she believes facial recognition technology is “problematic” and “terrible,” she added, “One reason I am concerned about just focusing on problems with the face or voice is that this may support stakeholders — like those purchasing and deploying these technologies, [such as] regulators, technologists or other actors — to move away from the collection of facial or voice data and simply shift to other sensitive data types without truly addressing their fundamental harmful implications, even if and when there are no bias, validity or accuracy concerns.”

The controversy around facial data

Even though the goal of these computing systems — to understand how people feel — is the same no matter what their data inputs are, many people see very important distinctions between the words we write or speak and the expressions our faces make. While interpreting the sentiments of what people write or say has its own set of problems (sarcasm, anyone?), sentiment analysis of language has not been subject to the intense level of criticism that emotion AI using facial expression data has.

The validity of emotion AI using facial expressions to gauge someone’s feelings has been seriously questioned, and often raises ethical concerns. Not only do some researchers believe the ways people express emotions such as joy, anger, fear and surprise vary across cultures and situations, but people often do not consciously project what they are thinking or feeling through their facial expressions. In contrast, people choose what they post online and what they say.

Indeed, what others might interpret from someone’s facial expressions can be quite different from what that person is actually feeling. In particular, neurodivergent people might express emotion in ways that can be inaccurately interpreted by other people or emotion AI.

As emotion AI is incorporated into more and more everyday tech, the drumbeat against it is growing louder.

In 2019, the AI Now Institute called for a ban on the use of emotion AI in important decisions such as in hiring and when judging student performance. In 2021, the Brookings Institution called for it to be banned in use by law enforcement, noting: “There is insufficient evidence that these technologies work reliably enough to be used for the high stakes of law enforcement. Even worse, they threaten core American principles of civil liberty in a pluralistic society by presuming that facial movements, physical reactions, and tone of voice can be evidence of criminality.”

Most recently, in its open letter to Zoom asking the company to nix potential plans to use emotion AI, Fight for the Future wrote: “The way we move our faces is often disconnected from the emotions underneath, and research has found that not even humans can measure emotion from faces some of the time. Why add credence to pseudoscience and stake your reputation on a fundamentally broken feature?”

This story was updated to reflect that Fight for the Future changed the wording of its campaign after this story was published.

