The drumbeat against emotion AI is getting louder.
On Wednesday, more than 25 human and digital rights organizations, including the American Civil Liberties Union, the Electronic Privacy Information Center and Fight for the Future, sent a letter to Zoom demanding the company abandon potential plans to build emotion AI features into its software. The letter comes in response to Protocol’s April reporting that Zoom was considering adding AI to its virtual meeting software to detect and analyze people’s moods and emotions.
“As a leader in the industry, [Zoom] really has the opportunity to set the tone and the pace with a lot of new developments in video meetings, and we think it’s really critical that they hear from civil rights groups about this,” said Caitlin Seeley George, campaign director at Fight for the Future, a digital rights group that launched the campaign against Zoom’s possible use of emotion AI in April.
“This software is discriminatory, manipulative, potentially dangerous and based on assumptions that all people use the same facial expressions, voice patterns, and body language,” the groups wrote in the letter, sent Wednesday to Zoom founder and CEO Eric Yuan.
Emotion AI uses computer vision and facial recognition, speech recognition, natural-language processing and other AI technologies to capture data representing people’s external expressions in an effort to detect their internal emotions, attitudes or feelings.
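To make that pipeline concrete, here is a minimal sketch in Python of how such a system is typically structured: detect a face in each video frame, crop it, and feed the crop to an expression classifier. This is an illustration, not Zoom’s or any vendor’s implementation; the EMOTIONS label set and the classify_emotion stub are hypothetical stand-ins for a trained model, and only the OpenCV face-detection calls are real.

```python
import cv2
import numpy as np

# Hypothetical label set; real systems define their own emotion taxonomy.
EMOTIONS = ["happy", "angry", "surprised", "neutral"]

def classify_emotion(face_pixels: np.ndarray) -> str:
    """Stand-in for a trained expression classifier (e.g., a CNN).
    A real model would map pixels to probabilities over emotion labels;
    random logits are used here only to show the pipeline's shape."""
    logits = np.random.randn(len(EMOTIONS))        # pretend model output
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over labels
    return EMOTIONS[int(np.argmax(probs))]

def analyze_frame(frame: np.ndarray) -> list:
    """Detect faces in one video frame, then assign each an emotion label."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    results = []
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        face = gray[y:y + h, x:x + w]              # crop the detected face
        results.append(((x, y, w, h), classify_emotion(face)))
    return results
```

Critics’ objection is aimed at the last step: mapping pixels to an inner state is exactly where, they argue, the science breaks down, because the same crop can correspond to very different feelings depending on culture, context and the individual.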
“Zoom’s use of this software gives credence to the pseudoscience of emotion analysis, which experts agree does not work. Facial expressions can vary significantly and are often disconnected from the emotions underneath, such that even humans are often not able to accurately decipher them,” they continued.
“It feels like they are a company that is open to considering all these factors,” Seeley George said of Zoom. Still, she said Zoom has not responded to the organization’s requests to discuss the issue. The company also has not responded to Protocol’s requests for comment.
AI-based features for assessing people’s emotional states are showing up in virtual classroom platforms and in vehicle technology meant to detect driver distraction, signs of drunkenness and road rage. Behind the scenes, companies that produce synthetic image and video data are supplying the raw materials used to train emotion AI and related systems, though those companies are sometimes several steps removed from the end uses of the data they provide.
The scientific validity of emotion AI has been seriously questioned, and the technology often raises ethical concerns. Some research shows that the ways people express emotions such as happiness, anger or surprise vary across cultures and situations. What others interpret from someone’s facial expressions can differ from what that person is actually feeling. In particular, neurodivergent people may express emotion in ways that are inaccurately interpreted by other people or by emotion AI.
Emotion AI has come under fire in recent years. In 2019, the AI Now Institute called for a ban on its use in important decisions such as hiring and judging student performance. In 2021, the Brookings Institution called for banning its use by law enforcement.
Some advocates pushing to stop the use of emotion AI worry that people will become increasingly comfortable with it if it is built into more everyday tech products. “People will get desensitized to the fact that they are under constant surveillance by these technologies,” Seeley George said.
“The opportunity for mission drift, for data sharing and for unknown consequences is just so high,” she said, noting that data about people’s behaviors and emotions could be shared with other corporations, government agencies or law enforcement.